00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2376 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3641 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.181 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.181 The recommended git tool is: git 00:00:00.182 using credential 00000000-0000-0000-0000-000000000002 00:00:00.183 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.228 Fetching changes from the remote Git repository 00:00:00.230 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.261 Using shallow fetch with depth 1 00:00:00.261 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.261 > git --version # timeout=10 00:00:00.289 > git --version # 'git version 2.39.2' 00:00:00.289 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.306 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.306 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.858 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.868 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.879 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:06.879 > git config core.sparsecheckout # timeout=10 00:00:06.889 > git read-tree -mu HEAD # timeout=10 00:00:06.904 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:06.920 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:06.920 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:07.002 [Pipeline] Start of Pipeline 00:00:07.016 [Pipeline] library 00:00:07.017 Loading library shm_lib@master 00:00:07.018 Library shm_lib@master is cached. Copying from home. 00:00:07.035 [Pipeline] node 00:00:07.058 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.060 [Pipeline] { 00:00:07.071 [Pipeline] catchError 00:00:07.072 [Pipeline] { 00:00:07.087 [Pipeline] wrap 00:00:07.097 [Pipeline] { 00:00:07.110 [Pipeline] stage 00:00:07.113 [Pipeline] { (Prologue) 00:00:07.324 [Pipeline] sh 00:00:08.288 + logger -p user.info -t JENKINS-CI 00:00:08.321 [Pipeline] echo 00:00:08.323 Node: GP11 00:00:08.338 [Pipeline] sh 00:00:08.700 [Pipeline] setCustomBuildProperty 00:00:08.709 [Pipeline] echo 00:00:08.710 Cleanup processes 00:00:08.714 [Pipeline] sh 00:00:09.009 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.009 5355 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.025 [Pipeline] sh 00:00:09.319 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.319 ++ grep -v 'sudo pgrep' 00:00:09.319 ++ awk '{print $1}' 00:00:09.319 + sudo kill -9 00:00:09.319 + true 00:00:09.336 [Pipeline] cleanWs 00:00:09.350 [WS-CLEANUP] Deleting project workspace... 00:00:09.350 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.367 [WS-CLEANUP] done 00:00:09.371 [Pipeline] setCustomBuildProperty 00:00:09.386 [Pipeline] sh 00:00:09.693 + sudo git config --global --replace-all safe.directory '*' 00:00:09.798 [Pipeline] httpRequest 00:00:12.163 [Pipeline] echo 00:00:12.164 Sorcerer 10.211.164.20 is alive 00:00:12.174 [Pipeline] retry 00:00:12.176 [Pipeline] { 00:00:12.190 [Pipeline] httpRequest 00:00:12.196 HttpMethod: GET 00:00:12.196 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:12.197 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:12.219 Response Code: HTTP/1.1 200 OK 00:00:12.220 Success: Status code 200 is in the accepted range: 200,404 00:00:12.220 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:16.334 [Pipeline] } 00:00:16.353 [Pipeline] // retry 00:00:16.361 [Pipeline] sh 00:00:16.660 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:16.681 [Pipeline] httpRequest 00:00:17.025 [Pipeline] echo 00:00:17.027 Sorcerer 10.211.164.20 is alive 00:00:17.038 [Pipeline] retry 00:00:17.040 [Pipeline] { 00:00:17.054 [Pipeline] httpRequest 00:00:17.059 HttpMethod: GET 00:00:17.059 URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:17.060 Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:17.086 Response Code: HTTP/1.1 200 OK 00:00:17.087 Success: Status code 200 is in the accepted range: 200,404 00:00:17.087 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:52.862 [Pipeline] } 00:00:52.880 [Pipeline] // retry 00:00:52.888 [Pipeline] sh 00:00:53.191 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:56.503 [Pipeline] sh 00:00:56.802 + git -C spdk log --oneline -n5 00:00:56.802 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:00:56.802 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:00:56.802 4bcab9fb9 correct kick for CQ full case 00:00:56.802 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:00:56.802 318515b44 nvme/perf: interrupt mode support for pcie controller 00:00:56.823 [Pipeline] withCredentials 00:00:56.836 > git --version # timeout=10 00:00:56.847 > git --version # 'git version 2.39.2' 00:00:56.878 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:56.880 [Pipeline] { 00:00:56.888 [Pipeline] retry 00:00:56.890 [Pipeline] { 00:00:56.904 [Pipeline] sh 00:00:57.511 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:57.792 [Pipeline] } 00:00:57.814 [Pipeline] // retry 00:00:57.820 [Pipeline] } 00:00:57.840 [Pipeline] // withCredentials 00:00:57.852 [Pipeline] httpRequest 00:00:58.242 [Pipeline] echo 00:00:58.244 Sorcerer 10.211.164.20 is alive 00:00:58.256 [Pipeline] retry 00:00:58.258 [Pipeline] { 00:00:58.271 [Pipeline] httpRequest 00:00:58.277 HttpMethod: GET 00:00:58.278 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:58.279 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:58.294 Response Code: HTTP/1.1 200 OK 00:00:58.295 Success: Status code 200 is in the accepted range: 200,404 00:00:58.295 Saving response body to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:32.823 [Pipeline] } 00:01:32.843 [Pipeline] // retry 00:01:32.852 [Pipeline] sh 00:01:33.152 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:34.559 [Pipeline] sh 00:01:34.850 + git -C dpdk log --oneline -n5 00:01:34.850 caf0f5d395 version: 22.11.4 00:01:34.850 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:34.850 dc9c799c7d vhost: fix missing spinlock unlock 00:01:34.850 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:34.850 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:34.863 [Pipeline] } 00:01:34.877 [Pipeline] // stage 00:01:34.887 [Pipeline] stage 00:01:34.889 [Pipeline] { (Prepare) 00:01:34.910 [Pipeline] writeFile 00:01:34.926 [Pipeline] sh 00:01:35.222 + logger -p user.info -t JENKINS-CI 00:01:35.236 [Pipeline] sh 00:01:35.527 + logger -p user.info -t JENKINS-CI 00:01:35.543 [Pipeline] sh 00:01:35.834 + cat autorun-spdk.conf 00:01:35.834 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:35.834 SPDK_TEST_NVMF=1 00:01:35.834 SPDK_TEST_NVME_CLI=1 00:01:35.834 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:35.834 SPDK_TEST_NVMF_NICS=e810 00:01:35.834 SPDK_TEST_VFIOUSER=1 00:01:35.834 SPDK_RUN_UBSAN=1 00:01:35.834 NET_TYPE=phy 00:01:35.834 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:35.834 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:35.844 RUN_NIGHTLY=1 00:01:35.848 [Pipeline] readFile 00:01:35.892 [Pipeline] withEnv 00:01:35.894 [Pipeline] { 00:01:35.907 [Pipeline] sh 00:01:36.200 + set -ex 00:01:36.200 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:36.200 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:36.200 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.200 ++ SPDK_TEST_NVMF=1 00:01:36.200 ++ SPDK_TEST_NVME_CLI=1 00:01:36.200 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:36.200 ++ SPDK_TEST_NVMF_NICS=e810 00:01:36.200 ++ SPDK_TEST_VFIOUSER=1 00:01:36.200 ++ SPDK_RUN_UBSAN=1 00:01:36.200 ++ NET_TYPE=phy 00:01:36.200 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:36.200 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:36.200 ++ RUN_NIGHTLY=1 00:01:36.200 + case $SPDK_TEST_NVMF_NICS in 00:01:36.200 + DRIVERS=ice 00:01:36.200 + [[ tcp == \r\d\m\a ]] 00:01:36.200 + [[ -n ice ]] 00:01:36.200 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:36.200 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:39.512 rmmod: ERROR: Module irdma is not currently loaded 00:01:39.512 rmmod: ERROR: Module i40iw is not currently loaded 00:01:39.512 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:39.512 + true 00:01:39.512 + for D in $DRIVERS 00:01:39.512 + sudo modprobe ice 00:01:39.512 + exit 0 00:01:39.524 [Pipeline] } 00:01:39.536 [Pipeline] // withEnv 00:01:39.541 [Pipeline] } 00:01:39.553 [Pipeline] // stage 00:01:39.562 [Pipeline] catchError 00:01:39.564 [Pipeline] { 00:01:39.578 [Pipeline] timeout 00:01:39.578 Timeout set to expire in 1 hr 0 min 00:01:39.580 [Pipeline] { 00:01:39.593 [Pipeline] stage 00:01:39.595 [Pipeline] { (Tests) 00:01:39.609 [Pipeline] sh 00:01:39.905 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:39.905 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:39.905 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:39.905 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:39.905 + 
DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:39.905 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:39.905 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:39.905 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:39.905 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:39.905 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:39.905 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:39.905 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:39.905 + source /etc/os-release 00:01:39.905 ++ NAME='Fedora Linux' 00:01:39.905 ++ VERSION='39 (Cloud Edition)' 00:01:39.905 ++ ID=fedora 00:01:39.905 ++ VERSION_ID=39 00:01:39.905 ++ VERSION_CODENAME= 00:01:39.905 ++ PLATFORM_ID=platform:f39 00:01:39.905 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:39.905 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:39.905 ++ LOGO=fedora-logo-icon 00:01:39.905 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:39.905 ++ HOME_URL=https://fedoraproject.org/ 00:01:39.905 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:39.905 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:39.905 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:39.905 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:39.905 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:39.905 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:39.905 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:39.905 ++ SUPPORT_END=2024-11-12 00:01:39.905 ++ VARIANT='Cloud Edition' 00:01:39.905 ++ VARIANT_ID=cloud 00:01:39.905 + uname -a 00:01:39.905 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:39.905 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:40.847 Hugepages 00:01:40.847 node hugesize free / total 00:01:40.847 node0 1048576kB 0 / 0 00:01:40.847 node0 2048kB 0 / 0 00:01:40.847 node1 1048576kB 0 / 0 00:01:40.847 node1 2048kB 0 / 0 00:01:40.847 00:01:40.847 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:40.847 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:40.847 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:40.847 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:40.847 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:40.847 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:40.847 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:40.847 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:41.106 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:41.106 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:41.106 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:41.106 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:41.106 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:41.106 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:41.106 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:41.107 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:41.107 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:41.107 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:41.107 + rm -f /tmp/spdk-ld-path 00:01:41.107 + source autorun-spdk.conf 00:01:41.107 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.107 ++ SPDK_TEST_NVMF=1 00:01:41.107 ++ SPDK_TEST_NVME_CLI=1 00:01:41.107 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:41.107 ++ SPDK_TEST_NVMF_NICS=e810 00:01:41.107 ++ SPDK_TEST_VFIOUSER=1 00:01:41.107 ++ SPDK_RUN_UBSAN=1 00:01:41.107 ++ NET_TYPE=phy 00:01:41.107 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:41.107 ++ 
SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:41.107 ++ RUN_NIGHTLY=1 00:01:41.107 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:41.107 + [[ -n '' ]] 00:01:41.107 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:41.107 + for M in /var/spdk/build-*-manifest.txt 00:01:41.107 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:41.107 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:41.107 + for M in /var/spdk/build-*-manifest.txt 00:01:41.107 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:41.107 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:41.107 + for M in /var/spdk/build-*-manifest.txt 00:01:41.107 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:41.107 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:41.107 ++ uname 00:01:41.107 + [[ Linux == \L\i\n\u\x ]] 00:01:41.107 + sudo dmesg -T 00:01:41.107 + sudo dmesg --clear 00:01:41.107 + dmesg_pid=6056 00:01:41.107 + [[ Fedora Linux == FreeBSD ]] 00:01:41.107 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:41.107 + sudo dmesg -Tw 00:01:41.107 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:41.107 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:41.107 + [[ -x /usr/src/fio-static/fio ]] 00:01:41.107 + export FIO_BIN=/usr/src/fio-static/fio 00:01:41.107 + FIO_BIN=/usr/src/fio-static/fio 00:01:41.107 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:41.107 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:41.107 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:41.107 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:41.107 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:41.107 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:41.107 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:41.107 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:41.107 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:41.107 06:47:02 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:41.107 06:47:02 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:41.107 06:47:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.107 06:47:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:41.107 06:47:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:41.107 06:47:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:41.107 06:47:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:41.107 06:47:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:41.107 06:47:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:41.107 06:47:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:41.107 06:47:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:41.107 06:47:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:41.107 06:47:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:01:41.107 06:47:02 
-- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:41.107 06:47:02 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:41.107 06:47:02 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:41.107 06:47:02 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:41.107 06:47:02 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:41.107 06:47:02 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:41.107 06:47:02 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:41.107 06:47:02 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:41.107 06:47:02 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.107 06:47:02 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.107 06:47:02 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.107 06:47:02 -- paths/export.sh@5 -- $ export PATH 00:01:41.107 06:47:02 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.107 06:47:02 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:41.107 06:47:02 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:41.107 06:47:02 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731908822.XXXXXX 00:01:41.107 06:47:02 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731908822.9uqMSn 00:01:41.107 06:47:02 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:41.107 06:47:02 -- common/autobuild_common.sh@492 -- $ '[' -n v22.11.4 ']' 00:01:41.107 06:47:02 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:41.107 06:47:02 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' 
--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:41.107 06:47:02 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:41.107 06:47:02 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:41.107 06:47:02 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:41.107 06:47:02 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:41.107 06:47:02 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.370 06:47:02 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:41.370 06:47:02 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:41.370 06:47:02 -- pm/common@17 -- $ local monitor 00:01:41.370 06:47:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.370 06:47:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.370 06:47:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.370 06:47:02 -- pm/common@21 -- $ date +%s 00:01:41.370 06:47:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.370 06:47:02 -- pm/common@21 -- $ date +%s 00:01:41.370 06:47:02 -- pm/common@25 -- $ sleep 1 00:01:41.370 06:47:02 -- pm/common@21 -- $ date +%s 00:01:41.370 06:47:02 -- pm/common@21 -- $ date +%s 00:01:41.370 06:47:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731908822 00:01:41.370 06:47:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731908822 00:01:41.370 06:47:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731908822 00:01:41.370 06:47:02 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731908822 00:01:41.370 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731908822_collect-vmstat.pm.log 00:01:41.370 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731908822_collect-cpu-load.pm.log 00:01:41.370 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731908822_collect-cpu-temp.pm.log 00:01:41.370 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731908822_collect-bmc-pm.bmc.pm.log 00:01:42.317 06:47:03 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:42.317 06:47:03 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:42.317 06:47:03 
-- spdk/autobuild.sh@12 -- $ umask 022 00:01:42.317 06:47:03 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:42.317 06:47:03 -- spdk/autobuild.sh@16 -- $ date -u 00:01:42.317 Mon Nov 18 05:47:03 AM UTC 2024 00:01:42.317 06:47:03 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:42.317 v25.01-pre-189-g83e8405e4 00:01:42.317 06:47:03 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:42.317 06:47:03 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:42.317 06:47:03 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:42.317 06:47:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:42.317 06:47:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:42.317 06:47:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:42.317 ************************************ 00:01:42.317 START TEST ubsan 00:01:42.317 ************************************ 00:01:42.317 06:47:03 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:42.317 using ubsan 00:01:42.317 00:01:42.317 real 0m0.000s 00:01:42.317 user 0m0.000s 00:01:42.317 sys 0m0.000s 00:01:42.317 06:47:03 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:42.317 06:47:03 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:42.317 ************************************ 00:01:42.317 END TEST ubsan 00:01:42.317 ************************************ 00:01:42.317 06:47:03 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:42.317 06:47:03 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:42.317 06:47:03 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:42.317 06:47:03 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:01:42.317 06:47:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:42.317 06:47:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:42.317 ************************************ 00:01:42.317 START TEST build_native_dpdk 00:01:42.317 ************************************ 00:01:42.317 06:47:03 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:01:42.317 06:47:03 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:42.317 06:47:03 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:42.317 06:47:03 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:42.317 06:47:03 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:42.317 06:47:03 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:42.317 06:47:03 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:42.317 06:47:03 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:42.317 06:47:03 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:42.317 06:47:03 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:42.317 06:47:03 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:42.317 06:47:03 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:42.317 06:47:03 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:42.317 06:47:03 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:42.317 06:47:03 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:42.317 06:47:03 build_native_dpdk -- common/autobuild_common.sh@70 -- $ 
external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:42.317 06:47:03 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:42.317 06:47:03 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:42.317 06:47:03 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:42.317 06:47:03 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:42.317 06:47:03 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:42.318 caf0f5d395 version: 22.11.4 00:01:42.318 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:42.318 dc9c799c7d vhost: fix missing spinlock unlock 00:01:42.318 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:42.318 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@337 -- $ 
read -ra ver2 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:42.318 patching file config/rte_config.h 00:01:42.318 Hunk #1 succeeded at 60 (offset 1 line). 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:42.318 patching file lib/pcapng/rte_pcapng.c 00:01:42.318 Hunk #1 succeeded at 110 (offset -18 lines). 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:42.318 06:47:03 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:42.318 06:47:03 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:48.896 The Meson build system 00:01:48.896 Version: 1.5.0 00:01:48.896 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:48.896 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:48.896 Build type: native build 00:01:48.896 Program cat found: YES (/usr/bin/cat) 00:01:48.896 Project name: DPDK 00:01:48.896 Project version: 22.11.4 00:01:48.896 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:48.896 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:48.896 Host machine cpu family: x86_64 00:01:48.896 Host machine cpu: x86_64 00:01:48.897 Message: ## Building in Developer Mode ## 00:01:48.897 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:48.897 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:48.897 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:48.897 Program objdump found: YES (/usr/bin/objdump) 00:01:48.897 Program python3 found: YES (/usr/bin/python3) 00:01:48.897 Program cat found: YES (/usr/bin/cat) 00:01:48.897 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:48.897 Checking for size of "void *" : 8 00:01:48.897 Checking for size of "void *" : 8 (cached) 00:01:48.897 Library m found: YES 00:01:48.897 Library numa found: YES 00:01:48.897 Has header "numaif.h" : YES 00:01:48.897 Library fdt found: NO 00:01:48.897 Library execinfo found: NO 00:01:48.897 Has header "execinfo.h" : YES 00:01:48.897 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:48.897 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:48.897 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:48.897 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:48.897 Run-time dependency openssl found: YES 3.1.1 00:01:48.897 Run-time dependency libpcap found: YES 1.10.4 00:01:48.897 Has header "pcap.h" with dependency libpcap: YES 00:01:48.897 Compiler for C supports arguments -Wcast-qual: YES 00:01:48.897 Compiler for C supports arguments -Wdeprecated: YES 00:01:48.897 Compiler for C supports arguments -Wformat: YES 00:01:48.897 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:48.897 Compiler for C supports arguments -Wformat-security: NO 00:01:48.897 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:48.897 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:48.897 Compiler for C supports arguments -Wnested-externs: YES 00:01:48.897 Compiler for C supports arguments -Wold-style-definition: YES 00:01:48.897 Compiler for C supports arguments -Wpointer-arith: YES 00:01:48.897 Compiler for C supports arguments -Wsign-compare: YES 00:01:48.897 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:48.897 Compiler for C supports arguments -Wundef: YES 00:01:48.897 Compiler for C supports arguments -Wwrite-strings: YES 00:01:48.897 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:48.897 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:48.897 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:48.897 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:48.897 Compiler for C supports arguments -mavx512f: YES 00:01:48.897 Checking if "AVX512 checking" compiles: YES 00:01:48.897 Fetching value of define "__SSE4_2__" : 1 00:01:48.897 Fetching value of define "__AES__" : 1 00:01:48.897 Fetching value of define "__AVX__" : 1 00:01:48.897 Fetching value of define "__AVX2__" : (undefined) 00:01:48.897 Fetching value of define "__AVX512BW__" : (undefined) 00:01:48.897 Fetching value of define "__AVX512CD__" : (undefined) 00:01:48.897 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:48.897 Fetching value of define "__AVX512F__" : (undefined) 00:01:48.897 Fetching value of define "__AVX512VL__" : (undefined) 00:01:48.897 Fetching value of define "__PCLMUL__" : 1 00:01:48.897 Fetching value of define "__RDRND__" : 1 00:01:48.897 Fetching value of define "__RDSEED__" : (undefined) 00:01:48.897 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:48.897 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:48.897 Message: lib/kvargs: Defining dependency "kvargs" 00:01:48.897 Message: lib/telemetry: Defining dependency "telemetry" 00:01:48.897 Checking for function "getentropy" : YES 00:01:48.897 Message: lib/eal: Defining dependency "eal" 00:01:48.897 Message: lib/ring: Defining dependency "ring" 00:01:48.897 Message: lib/rcu: Defining dependency "rcu" 00:01:48.897 Message: lib/mempool: Defining dependency "mempool" 00:01:48.897 Message: lib/mbuf: Defining dependency "mbuf" 00:01:48.897 
Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:48.897 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:48.897 Compiler for C supports arguments -mpclmul: YES 00:01:48.897 Compiler for C supports arguments -maes: YES 00:01:48.897 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:48.897 Compiler for C supports arguments -mavx512bw: YES 00:01:48.897 Compiler for C supports arguments -mavx512dq: YES 00:01:48.897 Compiler for C supports arguments -mavx512vl: YES 00:01:48.897 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:48.897 Compiler for C supports arguments -mavx2: YES 00:01:48.897 Compiler for C supports arguments -mavx: YES 00:01:48.897 Message: lib/net: Defining dependency "net" 00:01:48.897 Message: lib/meter: Defining dependency "meter" 00:01:48.897 Message: lib/ethdev: Defining dependency "ethdev" 00:01:48.897 Message: lib/pci: Defining dependency "pci" 00:01:48.897 Message: lib/cmdline: Defining dependency "cmdline" 00:01:48.897 Message: lib/metrics: Defining dependency "metrics" 00:01:48.897 Message: lib/hash: Defining dependency "hash" 00:01:48.897 Message: lib/timer: Defining dependency "timer" 00:01:48.897 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:48.897 Compiler for C supports arguments -mavx2: YES (cached) 00:01:48.897 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:48.897 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:48.897 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:48.897 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:48.897 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:48.897 Message: lib/acl: Defining dependency "acl" 00:01:48.897 Message: lib/bbdev: Defining dependency "bbdev" 00:01:48.897 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:48.897 Run-time dependency libelf found: YES 0.191 00:01:48.897 Message: lib/bpf: Defining dependency "bpf" 00:01:48.897 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:48.897 Message: lib/compressdev: Defining dependency "compressdev" 00:01:48.897 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:48.897 Message: lib/distributor: Defining dependency "distributor" 00:01:48.897 Message: lib/efd: Defining dependency "efd" 00:01:48.897 Message: lib/eventdev: Defining dependency "eventdev" 00:01:48.897 Message: lib/gpudev: Defining dependency "gpudev" 00:01:48.897 Message: lib/gro: Defining dependency "gro" 00:01:48.897 Message: lib/gso: Defining dependency "gso" 00:01:48.897 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:48.897 Message: lib/jobstats: Defining dependency "jobstats" 00:01:48.897 Message: lib/latencystats: Defining dependency "latencystats" 00:01:48.897 Message: lib/lpm: Defining dependency "lpm" 00:01:48.897 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:48.897 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:48.897 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:48.897 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:48.897 Message: lib/member: Defining dependency "member" 00:01:48.897 Message: lib/pcapng: Defining dependency "pcapng" 00:01:48.897 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:48.897 Message: lib/power: Defining dependency "power" 00:01:48.897 Message: lib/rawdev: Defining dependency "rawdev" 00:01:48.897 Message: lib/regexdev: Defining dependency "regexdev" 
00:01:48.897 Message: lib/dmadev: Defining dependency "dmadev" 00:01:48.897 Message: lib/rib: Defining dependency "rib" 00:01:48.897 Message: lib/reorder: Defining dependency "reorder" 00:01:48.897 Message: lib/sched: Defining dependency "sched" 00:01:48.897 Message: lib/security: Defining dependency "security" 00:01:48.897 Message: lib/stack: Defining dependency "stack" 00:01:48.897 Has header "linux/userfaultfd.h" : YES 00:01:48.897 Message: lib/vhost: Defining dependency "vhost" 00:01:48.897 Message: lib/ipsec: Defining dependency "ipsec" 00:01:48.897 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:48.897 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:48.897 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:48.897 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:48.897 Message: lib/fib: Defining dependency "fib" 00:01:48.897 Message: lib/port: Defining dependency "port" 00:01:48.897 Message: lib/pdump: Defining dependency "pdump" 00:01:48.897 Message: lib/table: Defining dependency "table" 00:01:48.897 Message: lib/pipeline: Defining dependency "pipeline" 00:01:48.897 Message: lib/graph: Defining dependency "graph" 00:01:48.897 Message: lib/node: Defining dependency "node" 00:01:48.897 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:48.897 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:48.897 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:48.897 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:48.897 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:48.897 Compiler for C supports arguments -Wno-unused-value: YES 00:01:49.833 Compiler for C supports arguments -Wno-format: YES 00:01:49.833 Compiler for C supports arguments -Wno-format-security: YES 00:01:49.833 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:49.833 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:49.833 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:49.833 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:49.833 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:49.833 Compiler for C supports arguments -mavx2: YES (cached) 00:01:49.833 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:49.833 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:49.833 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:49.833 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:49.833 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:49.833 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:49.833 Configuring doxy-api.conf using configuration 00:01:49.833 Program sphinx-build found: NO 00:01:49.833 Configuring rte_build_config.h using configuration 00:01:49.833 Message: 00:01:49.833 ================= 00:01:49.833 Applications Enabled 00:01:49.833 ================= 00:01:49.833 00:01:49.833 apps: 00:01:49.833 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:49.833 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:49.833 test-security-perf, 00:01:49.833 00:01:49.833 Message: 00:01:49.833 ================= 00:01:49.833 Libraries Enabled 00:01:49.833 ================= 00:01:49.833 00:01:49.833 libs: 00:01:49.833 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:49.833 meter, ethdev, 
pci, cmdline, metrics, hash, timer, acl, 00:01:49.833 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:49.833 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:49.833 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:49.833 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:49.833 table, pipeline, graph, node, 00:01:49.833 00:01:49.833 Message: 00:01:49.833 =============== 00:01:49.833 Drivers Enabled 00:01:49.833 =============== 00:01:49.833 00:01:49.833 common: 00:01:49.833 00:01:49.833 bus: 00:01:49.833 pci, vdev, 00:01:49.833 mempool: 00:01:49.833 ring, 00:01:49.833 dma: 00:01:49.833 00:01:49.833 net: 00:01:49.833 i40e, 00:01:49.833 raw: 00:01:49.833 00:01:49.833 crypto: 00:01:49.833 00:01:49.833 compress: 00:01:49.833 00:01:49.833 regex: 00:01:49.833 00:01:49.833 vdpa: 00:01:49.833 00:01:49.833 event: 00:01:49.833 00:01:49.833 baseband: 00:01:49.833 00:01:49.833 gpu: 00:01:49.833 00:01:49.833 00:01:49.833 Message: 00:01:49.833 ================= 00:01:49.833 Content Skipped 00:01:49.833 ================= 00:01:49.833 00:01:49.833 apps: 00:01:49.833 00:01:49.834 libs: 00:01:49.834 kni: explicitly disabled via build config (deprecated lib) 00:01:49.834 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:49.834 00:01:49.834 drivers: 00:01:49.834 common/cpt: not in enabled drivers build config 00:01:49.834 common/dpaax: not in enabled drivers build config 00:01:49.834 common/iavf: not in enabled drivers build config 00:01:49.834 common/idpf: not in enabled drivers build config 00:01:49.834 common/mvep: not in enabled drivers build config 00:01:49.834 common/octeontx: not in enabled drivers build config 00:01:49.834 bus/auxiliary: not in enabled drivers build config 00:01:49.834 bus/dpaa: not in enabled drivers build config 00:01:49.834 bus/fslmc: not in enabled drivers build config 00:01:49.834 bus/ifpga: not in enabled drivers build config 00:01:49.834 bus/vmbus: not in enabled drivers build config 00:01:49.834 common/cnxk: not in enabled drivers build config 00:01:49.834 common/mlx5: not in enabled drivers build config 00:01:49.834 common/qat: not in enabled drivers build config 00:01:49.834 common/sfc_efx: not in enabled drivers build config 00:01:49.834 mempool/bucket: not in enabled drivers build config 00:01:49.834 mempool/cnxk: not in enabled drivers build config 00:01:49.834 mempool/dpaa: not in enabled drivers build config 00:01:49.834 mempool/dpaa2: not in enabled drivers build config 00:01:49.834 mempool/octeontx: not in enabled drivers build config 00:01:49.834 mempool/stack: not in enabled drivers build config 00:01:49.834 dma/cnxk: not in enabled drivers build config 00:01:49.834 dma/dpaa: not in enabled drivers build config 00:01:49.834 dma/dpaa2: not in enabled drivers build config 00:01:49.834 dma/hisilicon: not in enabled drivers build config 00:01:49.834 dma/idxd: not in enabled drivers build config 00:01:49.834 dma/ioat: not in enabled drivers build config 00:01:49.834 dma/skeleton: not in enabled drivers build config 00:01:49.834 net/af_packet: not in enabled drivers build config 00:01:49.834 net/af_xdp: not in enabled drivers build config 00:01:49.834 net/ark: not in enabled drivers build config 00:01:49.834 net/atlantic: not in enabled drivers build config 00:01:49.834 net/avp: not in enabled drivers build config 00:01:49.834 net/axgbe: not in enabled drivers build config 00:01:49.834 net/bnx2x: not in enabled drivers build config 00:01:49.834 net/bnxt: not in 
enabled drivers build config 00:01:49.834 net/bonding: not in enabled drivers build config 00:01:49.834 net/cnxk: not in enabled drivers build config 00:01:49.834 net/cxgbe: not in enabled drivers build config 00:01:49.834 net/dpaa: not in enabled drivers build config 00:01:49.834 net/dpaa2: not in enabled drivers build config 00:01:49.834 net/e1000: not in enabled drivers build config 00:01:49.834 net/ena: not in enabled drivers build config 00:01:49.834 net/enetc: not in enabled drivers build config 00:01:49.834 net/enetfec: not in enabled drivers build config 00:01:49.834 net/enic: not in enabled drivers build config 00:01:49.834 net/failsafe: not in enabled drivers build config 00:01:49.834 net/fm10k: not in enabled drivers build config 00:01:49.834 net/gve: not in enabled drivers build config 00:01:49.834 net/hinic: not in enabled drivers build config 00:01:49.834 net/hns3: not in enabled drivers build config 00:01:49.834 net/iavf: not in enabled drivers build config 00:01:49.834 net/ice: not in enabled drivers build config 00:01:49.834 net/idpf: not in enabled drivers build config 00:01:49.834 net/igc: not in enabled drivers build config 00:01:49.834 net/ionic: not in enabled drivers build config 00:01:49.834 net/ipn3ke: not in enabled drivers build config 00:01:49.834 net/ixgbe: not in enabled drivers build config 00:01:49.834 net/kni: not in enabled drivers build config 00:01:49.834 net/liquidio: not in enabled drivers build config 00:01:49.834 net/mana: not in enabled drivers build config 00:01:49.834 net/memif: not in enabled drivers build config 00:01:49.834 net/mlx4: not in enabled drivers build config 00:01:49.834 net/mlx5: not in enabled drivers build config 00:01:49.834 net/mvneta: not in enabled drivers build config 00:01:49.834 net/mvpp2: not in enabled drivers build config 00:01:49.834 net/netvsc: not in enabled drivers build config 00:01:49.834 net/nfb: not in enabled drivers build config 00:01:49.834 net/nfp: not in enabled drivers build config 00:01:49.834 net/ngbe: not in enabled drivers build config 00:01:49.834 net/null: not in enabled drivers build config 00:01:49.834 net/octeontx: not in enabled drivers build config 00:01:49.834 net/octeon_ep: not in enabled drivers build config 00:01:49.834 net/pcap: not in enabled drivers build config 00:01:49.834 net/pfe: not in enabled drivers build config 00:01:49.834 net/qede: not in enabled drivers build config 00:01:49.834 net/ring: not in enabled drivers build config 00:01:49.834 net/sfc: not in enabled drivers build config 00:01:49.834 net/softnic: not in enabled drivers build config 00:01:49.834 net/tap: not in enabled drivers build config 00:01:49.834 net/thunderx: not in enabled drivers build config 00:01:49.834 net/txgbe: not in enabled drivers build config 00:01:49.834 net/vdev_netvsc: not in enabled drivers build config 00:01:49.834 net/vhost: not in enabled drivers build config 00:01:49.834 net/virtio: not in enabled drivers build config 00:01:49.834 net/vmxnet3: not in enabled drivers build config 00:01:49.834 raw/cnxk_bphy: not in enabled drivers build config 00:01:49.834 raw/cnxk_gpio: not in enabled drivers build config 00:01:49.834 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:49.834 raw/ifpga: not in enabled drivers build config 00:01:49.834 raw/ntb: not in enabled drivers build config 00:01:49.834 raw/skeleton: not in enabled drivers build config 00:01:49.834 crypto/armv8: not in enabled drivers build config 00:01:49.834 crypto/bcmfs: not in enabled drivers build config 00:01:49.834 
crypto/caam_jr: not in enabled drivers build config 00:01:49.834 crypto/ccp: not in enabled drivers build config 00:01:49.834 crypto/cnxk: not in enabled drivers build config 00:01:49.834 crypto/dpaa_sec: not in enabled drivers build config 00:01:49.834 crypto/dpaa2_sec: not in enabled drivers build config 00:01:49.834 crypto/ipsec_mb: not in enabled drivers build config 00:01:49.834 crypto/mlx5: not in enabled drivers build config 00:01:49.834 crypto/mvsam: not in enabled drivers build config 00:01:49.834 crypto/nitrox: not in enabled drivers build config 00:01:49.834 crypto/null: not in enabled drivers build config 00:01:49.834 crypto/octeontx: not in enabled drivers build config 00:01:49.834 crypto/openssl: not in enabled drivers build config 00:01:49.834 crypto/scheduler: not in enabled drivers build config 00:01:49.834 crypto/uadk: not in enabled drivers build config 00:01:49.834 crypto/virtio: not in enabled drivers build config 00:01:49.834 compress/isal: not in enabled drivers build config 00:01:49.834 compress/mlx5: not in enabled drivers build config 00:01:49.834 compress/octeontx: not in enabled drivers build config 00:01:49.834 compress/zlib: not in enabled drivers build config 00:01:49.834 regex/mlx5: not in enabled drivers build config 00:01:49.834 regex/cn9k: not in enabled drivers build config 00:01:49.834 vdpa/ifc: not in enabled drivers build config 00:01:49.834 vdpa/mlx5: not in enabled drivers build config 00:01:49.834 vdpa/sfc: not in enabled drivers build config 00:01:49.834 event/cnxk: not in enabled drivers build config 00:01:49.834 event/dlb2: not in enabled drivers build config 00:01:49.834 event/dpaa: not in enabled drivers build config 00:01:49.834 event/dpaa2: not in enabled drivers build config 00:01:49.834 event/dsw: not in enabled drivers build config 00:01:49.834 event/opdl: not in enabled drivers build config 00:01:49.834 event/skeleton: not in enabled drivers build config 00:01:49.834 event/sw: not in enabled drivers build config 00:01:49.834 event/octeontx: not in enabled drivers build config 00:01:49.834 baseband/acc: not in enabled drivers build config 00:01:49.834 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:49.834 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:49.834 baseband/la12xx: not in enabled drivers build config 00:01:49.834 baseband/null: not in enabled drivers build config 00:01:49.834 baseband/turbo_sw: not in enabled drivers build config 00:01:49.834 gpu/cuda: not in enabled drivers build config 00:01:49.834 00:01:49.834 00:01:49.834 Build targets in project: 316 00:01:49.834 00:01:49.834 DPDK 22.11.4 00:01:49.834 00:01:49.834 User defined options 00:01:49.834 libdir : lib 00:01:49.834 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:49.834 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:49.834 c_link_args : 00:01:49.834 enable_docs : false 00:01:49.834 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:49.834 enable_kmods : false 00:01:49.834 machine : native 00:01:49.834 tests : false 00:01:49.834 00:01:49.834 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:49.834 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
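The "User defined options" block above is meson's echo of how this DPDK 22.11.4 tree was configured for the test run, and the warning notes that the wrapper invoked meson without the explicit "setup" subcommand. As a rough illustration only, the following is a minimal sketch of an equivalent explicit invocation reconstructed from the logged options (prefix, libdir, c_args, enable_docs, enable_drivers, enable_kmods, machine, tests) and from the build directory shown in the ninja line below; the actual arguments passed by autobuild_common.sh are not visible in this log, so treat the exact command form as an assumption.

# Sketch, not the CI wrapper's literal command line:
# configure an out-of-tree build dir named build-tmp with the options echoed above
meson setup build-tmp \
  --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
  -Dlibdir=lib \
  -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
  -Denable_docs=false \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
  -Denable_kmods=false \
  -Dmachine=native \
  -Dtests=false
# then compile with the same parallelism the log shows
ninja -C build-tmp -j48

Using the explicit "meson setup" form avoids the deprecation warning printed above; everything else should reproduce the same enabled library/driver set (only the ring mempool, PCI/vdev buses, and the i40e net driver, as listed in "Drivers Enabled").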
00:01:49.834 06:47:10 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:49.834 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:49.834 [1/745] Generating lib/rte_kvargs_def with a custom command 00:01:49.834 [2/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:49.834 [3/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:49.834 [4/745] Generating lib/rte_telemetry_def with a custom command 00:01:50.099 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:50.099 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:50.099 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:50.099 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:50.099 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:50.099 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:50.099 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:50.099 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:50.099 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:50.099 [14/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:50.099 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:50.099 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:50.099 [17/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:50.099 [18/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:50.099 [19/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:50.099 [20/745] Linking static target lib/librte_kvargs.a 00:01:50.099 [21/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:50.099 [22/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:50.099 [23/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:50.099 [24/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:50.099 [25/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:50.099 [26/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:50.099 [27/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:50.099 [28/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:50.099 [29/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:50.099 [30/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:50.099 [31/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:50.099 [32/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:50.099 [33/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:50.099 [34/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:50.366 [35/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:50.366 [36/745] Generating lib/rte_eal_def with a custom command 00:01:50.366 [37/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:50.366 [38/745] Compiling C object 
lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:50.366 [39/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:50.366 [40/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:50.366 [41/745] Generating lib/rte_eal_mingw with a custom command 00:01:50.366 [42/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:50.366 [43/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:50.366 [44/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:50.366 [45/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:50.366 [46/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:50.366 [47/745] Generating lib/rte_ring_def with a custom command 00:01:50.366 [48/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:50.366 [49/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:50.366 [50/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:50.366 [51/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:50.366 [52/745] Generating lib/rte_rcu_def with a custom command 00:01:50.366 [53/745] Generating lib/rte_ring_mingw with a custom command 00:01:50.366 [54/745] Generating lib/rte_rcu_mingw with a custom command 00:01:50.366 [55/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:50.366 [56/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:50.366 [57/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:50.366 [58/745] Generating lib/rte_mempool_def with a custom command 00:01:50.366 [59/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:50.366 [60/745] Generating lib/rte_mempool_mingw with a custom command 00:01:50.366 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:50.366 [62/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:50.366 [63/745] Generating lib/rte_mbuf_def with a custom command 00:01:50.366 [64/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:50.366 [65/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:50.366 [66/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:50.366 [67/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:50.366 [68/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:50.366 [69/745] Generating lib/rte_meter_def with a custom command 00:01:50.366 [70/745] Generating lib/rte_net_def with a custom command 00:01:50.366 [71/745] Generating lib/rte_net_mingw with a custom command 00:01:50.366 [72/745] Generating lib/rte_meter_mingw with a custom command 00:01:50.366 [73/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:50.366 [74/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:50.366 [75/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:50.366 [76/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:50.625 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:50.625 [78/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:50.625 [79/745] Linking static target lib/librte_ring.a 00:01:50.625 [80/745] Generating lib/kvargs.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:50.625 [81/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:50.625 [82/745] Generating lib/rte_ethdev_def with a custom command 00:01:50.625 [83/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:50.625 [84/745] Linking target lib/librte_kvargs.so.23.0 00:01:50.625 [85/745] Linking static target lib/librte_meter.a 00:01:50.625 [86/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:50.625 [87/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:50.625 [88/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:50.625 [89/745] Generating lib/rte_pci_def with a custom command 00:01:50.625 [90/745] Generating lib/rte_pci_mingw with a custom command 00:01:50.625 [91/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:50.891 [92/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:50.891 [93/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:50.891 [94/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:50.891 [95/745] Linking static target lib/librte_pci.a 00:01:50.891 [96/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:50.891 [97/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:50.891 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:51.151 [99/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:51.151 [100/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.151 [101/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.151 [102/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:51.151 [103/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:51.151 [104/745] Linking static target lib/librte_telemetry.a 00:01:51.151 [105/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:51.151 [106/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:51.151 [107/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:51.151 [108/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:51.151 [109/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:51.151 [110/745] Generating lib/rte_cmdline_def with a custom command 00:01:51.151 [111/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:51.151 [112/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:51.151 [113/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:51.151 [114/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.151 [115/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:51.151 [116/745] Generating lib/rte_metrics_mingw with a custom command 00:01:51.151 [117/745] Generating lib/rte_metrics_def with a custom command 00:01:51.151 [118/745] Generating lib/rte_hash_def with a custom command 00:01:51.151 [119/745] Generating lib/rte_hash_mingw with a custom command 00:01:51.151 [120/745] Generating lib/rte_timer_mingw with a custom command 00:01:51.151 [121/745] Generating lib/rte_timer_def with a custom command 00:01:51.416 [122/745] Compiling C object 
lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:51.416 [123/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:51.416 [124/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:51.416 [125/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:51.416 [126/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:51.416 [127/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:51.416 [128/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:51.416 [129/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:51.416 [130/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:51.679 [131/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:51.679 [132/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:51.679 [133/745] Generating lib/rte_acl_mingw with a custom command 00:01:51.679 [134/745] Generating lib/rte_acl_def with a custom command 00:01:51.679 [135/745] Generating lib/rte_bbdev_def with a custom command 00:01:51.679 [136/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:51.679 [137/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:51.679 [138/745] Generating lib/rte_bitratestats_def with a custom command 00:01:51.679 [139/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:51.679 [140/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:51.679 [141/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:51.679 [142/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:51.679 [143/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:51.679 [144/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:51.679 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:51.679 [146/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.679 [147/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:51.944 [148/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:51.944 [149/745] Linking target lib/librte_telemetry.so.23.0 00:01:51.944 [150/745] Generating lib/rte_bpf_def with a custom command 00:01:51.944 [151/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:51.944 [152/745] Generating lib/rte_bpf_mingw with a custom command 00:01:51.944 [153/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:51.944 [154/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:51.944 [155/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:51.944 [156/745] Generating lib/rte_cfgfile_def with a custom command 00:01:51.944 [157/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:51.944 [158/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:51.944 [159/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:51.944 [160/745] Generating lib/rte_compressdev_def with a custom command 00:01:51.944 [161/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:51.944 [162/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:51.944 [163/745] Linking static target lib/librte_rcu.a 00:01:51.944 [164/745] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:52.209 [165/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:52.209 [166/745] Generating lib/rte_cryptodev_def with a custom command 00:01:52.209 [167/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:52.209 [168/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:52.209 [169/745] Linking static target lib/librte_timer.a 00:01:52.209 [170/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:52.209 [171/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:52.210 [172/745] Linking static target lib/librte_net.a 00:01:52.210 [173/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:52.210 [174/745] Generating lib/rte_distributor_def with a custom command 00:01:52.210 [175/745] Linking static target lib/librte_cmdline.a 00:01:52.210 [176/745] Generating lib/rte_distributor_mingw with a custom command 00:01:52.210 [177/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:52.210 [178/745] Generating lib/rte_efd_def with a custom command 00:01:52.210 [179/745] Generating lib/rte_efd_mingw with a custom command 00:01:52.477 [180/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:52.477 [181/745] Linking static target lib/librte_mempool.a 00:01:52.477 [182/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:52.477 [183/745] Linking static target lib/librte_cfgfile.a 00:01:52.477 [184/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:52.477 [185/745] Linking static target lib/librte_metrics.a 00:01:52.477 [186/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.477 [187/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.739 [188/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.739 [189/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:52.739 [190/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:52.739 [191/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:52.739 [192/745] Linking static target lib/librte_eal.a 00:01:52.739 [193/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:52.739 [194/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:52.739 [195/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:53.004 [196/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:53.004 [197/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:53.004 [198/745] Generating lib/rte_eventdev_def with a custom command 00:01:53.004 [199/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.004 [200/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:53.004 [201/745] Linking static target lib/librte_bitratestats.a 00:01:53.004 [202/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:53.004 [203/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:53.004 [204/745] Generating lib/rte_gpudev_def with a custom command 00:01:53.004 [205/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:53.004 [206/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.004 [207/745] Generating lib/rte_gro_def with a 
custom command 00:01:53.271 [208/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:53.271 [209/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:53.271 [210/745] Generating lib/rte_gro_mingw with a custom command 00:01:53.271 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:53.271 [212/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.271 [213/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:53.271 [214/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:53.271 [215/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:53.271 [216/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:53.539 [217/745] Generating lib/rte_gso_mingw with a custom command 00:01:53.539 [218/745] Generating lib/rte_gso_def with a custom command 00:01:53.539 [219/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:53.539 [220/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:53.539 [221/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:53.539 [222/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:53.539 [223/745] Linking static target lib/librte_bbdev.a 00:01:53.539 [224/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:53.539 [225/745] Generating lib/rte_ip_frag_def with a custom command 00:01:53.539 [226/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.539 [227/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:53.539 [228/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.539 [229/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:53.539 [230/745] Generating lib/rte_jobstats_def with a custom command 00:01:53.539 [231/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:53.539 [232/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:53.811 [233/745] Generating lib/rte_latencystats_def with a custom command 00:01:53.811 [234/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:53.811 [235/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:53.811 [236/745] Linking static target lib/librte_compressdev.a 00:01:53.811 [237/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:53.811 [238/745] Generating lib/rte_lpm_def with a custom command 00:01:53.811 [239/745] Generating lib/rte_lpm_mingw with a custom command 00:01:53.811 [240/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:53.811 [241/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:53.811 [242/745] Linking static target lib/librte_jobstats.a 00:01:53.811 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:54.075 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:54.075 [245/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:54.075 [246/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:54.075 [247/745] Generating lib/rte_member_def with a custom command 00:01:54.075 
[248/745] Linking static target lib/librte_distributor.a 00:01:54.350 [249/745] Generating lib/rte_member_mingw with a custom command 00:01:54.350 [250/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:54.350 [251/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.350 [252/745] Linking static target lib/librte_bpf.a 00:01:54.350 [253/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:54.350 [254/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:54.350 [255/745] Generating lib/rte_pcapng_def with a custom command 00:01:54.622 [256/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:54.622 [257/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.622 [258/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:54.622 [259/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:54.622 [260/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.622 [261/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:54.622 [262/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:54.622 [263/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:54.622 [264/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:54.622 [265/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:54.622 [266/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:54.622 [267/745] Generating lib/rte_power_def with a custom command 00:01:54.622 [268/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:54.622 [269/745] Generating lib/rte_power_mingw with a custom command 00:01:54.622 [270/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:54.622 [271/745] Linking static target lib/librte_gpudev.a 00:01:54.622 [272/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:54.622 [273/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:54.622 [274/745] Generating lib/rte_rawdev_def with a custom command 00:01:54.622 [275/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:54.622 [276/745] Linking static target lib/librte_gro.a 00:01:54.893 [277/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:54.893 [278/745] Generating lib/rte_regexdev_def with a custom command 00:01:54.893 [279/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:54.893 [280/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:54.893 [281/745] Generating lib/rte_dmadev_def with a custom command 00:01:54.893 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:54.893 [283/745] Generating lib/rte_rib_def with a custom command 00:01:54.893 [284/745] Generating lib/rte_rib_mingw with a custom command 00:01:54.893 [285/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.893 [286/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:54.893 [287/745] Generating lib/rte_reorder_mingw with a custom command 00:01:54.893 [288/745] Generating lib/rte_reorder_def with a custom command 00:01:54.893 [289/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:55.161 [290/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:55.161 [291/745] Generating 
lib/rte_sched_def with a custom command 00:01:55.161 [292/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.161 [293/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.161 [294/745] Generating lib/rte_sched_mingw with a custom command 00:01:55.161 [295/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:55.161 [296/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:55.161 [297/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:55.161 [298/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:55.161 [299/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:55.161 [300/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:55.161 [301/745] Linking static target lib/librte_latencystats.a 00:01:55.161 [302/745] Generating lib/rte_security_mingw with a custom command 00:01:55.161 [303/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:55.161 [304/745] Generating lib/rte_security_def with a custom command 00:01:55.161 [305/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:55.161 [306/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:55.161 [307/745] Generating lib/rte_stack_mingw with a custom command 00:01:55.161 [308/745] Generating lib/rte_stack_def with a custom command 00:01:55.429 [309/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:55.429 [310/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:55.429 [311/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:55.429 [312/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:55.429 [313/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:55.429 [314/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:55.429 [315/745] Linking static target lib/librte_stack.a 00:01:55.429 [316/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:55.429 [317/745] Linking static target lib/librte_rawdev.a 00:01:55.429 [318/745] Generating lib/rte_vhost_def with a custom command 00:01:55.429 [319/745] Generating lib/rte_vhost_mingw with a custom command 00:01:55.429 [320/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:55.429 [321/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:55.429 [322/745] Linking static target lib/librte_dmadev.a 00:01:55.699 [323/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:55.699 [324/745] Linking static target lib/librte_ip_frag.a 00:01:55.699 [325/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.699 [326/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:55.699 [327/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:55.699 [328/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:55.699 [329/745] Generating lib/rte_ipsec_def with a custom command 00:01:55.699 [330/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.699 [331/745] Generating lib/rte_ipsec_mingw with a custom command 00:01:55.963 [332/745] Compiling C object 
lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:55.963 [333/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:55.963 [334/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:55.963 [335/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.230 [336/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:56.230 [337/745] Generating lib/rte_fib_def with a custom command 00:01:56.230 [338/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.230 [339/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.230 [340/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:56.230 [341/745] Generating lib/rte_fib_mingw with a custom command 00:01:56.230 [342/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:56.230 [343/745] Linking static target lib/librte_regexdev.a 00:01:56.494 [344/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:56.494 [345/745] Linking static target lib/librte_gso.a 00:01:56.494 [346/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:56.494 [347/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:56.494 [348/745] Linking static target lib/librte_efd.a 00:01:56.494 [349/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.494 [350/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:56.494 [351/745] Linking static target lib/librte_pcapng.a 00:01:56.494 [352/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:56.764 [353/745] Linking static target lib/librte_lpm.a 00:01:56.764 [354/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:56.764 [355/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:56.764 [356/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:56.764 [357/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.764 [358/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:56.764 [359/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:56.764 [360/745] Linking static target lib/librte_reorder.a 00:01:57.029 [361/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:57.029 [362/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.029 [363/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:57.029 [364/745] Linking static target lib/acl/libavx2_tmp.a 00:01:57.029 [365/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:57.029 [366/745] Generating lib/rte_port_def with a custom command 00:01:57.029 [367/745] Generating lib/rte_port_mingw with a custom command 00:01:57.029 [368/745] Generating lib/rte_pdump_def with a custom command 00:01:57.029 [369/745] Generating lib/rte_pdump_mingw with a custom command 00:01:57.029 [370/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.029 [371/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:57.029 [372/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:57.029 [373/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:57.029 [374/745] Linking static target 
lib/fib/libtrie_avx512_tmp.a 00:01:57.029 [375/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:57.029 [376/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:57.029 [377/745] Linking static target lib/librte_security.a 00:01:57.029 [378/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:57.296 [379/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:57.296 [380/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:57.296 [381/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.296 [382/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:57.296 [383/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:57.296 [384/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.296 [385/745] Linking static target lib/librte_hash.a 00:01:57.296 [386/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:57.296 [387/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:57.296 [388/745] Linking static target lib/librte_power.a 00:01:57.560 [389/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:57.560 [390/745] Linking static target lib/librte_rib.a 00:01:57.560 [391/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:57.560 [392/745] Linking static target lib/acl/libavx512_tmp.a 00:01:57.560 [393/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.560 [394/745] Linking static target lib/librte_acl.a 00:01:57.560 [395/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:57.560 [396/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:57.821 [397/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:57.821 [398/745] Linking static target lib/librte_ethdev.a 00:01:57.821 [399/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:57.821 [400/745] Generating lib/rte_table_def with a custom command 00:01:57.821 [401/745] Generating lib/rte_table_mingw with a custom command 00:01:57.821 [402/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.086 [403/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.349 [404/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:58.349 [405/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.349 [406/745] Linking static target lib/librte_mbuf.a 00:01:58.349 [407/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:58.349 [408/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:58.349 [409/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:58.349 [410/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:58.349 [411/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:58.349 [412/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:58.349 [413/745] Generating lib/rte_pipeline_def with a custom command 00:01:58.349 [414/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:58.349 [415/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:58.349 [416/745] Compiling C object 
lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:58.613 [417/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:58.613 [418/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:58.613 [419/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:58.613 [420/745] Linking static target lib/librte_fib.a 00:01:58.613 [421/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:58.613 [422/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.613 [423/745] Generating lib/rte_graph_def with a custom command 00:01:58.613 [424/745] Generating lib/rte_graph_mingw with a custom command 00:01:58.613 [425/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:58.881 [426/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:58.881 [427/745] Linking static target lib/librte_member.a 00:01:58.881 [428/745] Linking static target lib/librte_eventdev.a 00:01:58.881 [429/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:58.881 [430/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:58.881 [431/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:58.881 [432/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:58.881 [433/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:58.881 [434/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:59.148 [435/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:59.148 [436/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.148 [437/745] Generating lib/rte_node_def with a custom command 00:01:59.148 [438/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.148 [439/745] Generating lib/rte_node_mingw with a custom command 00:01:59.148 [440/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:59.148 [441/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:59.148 [442/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:59.148 [443/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.148 [444/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:59.412 [445/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:59.412 [446/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:59.412 [447/745] Linking static target lib/librte_sched.a 00:01:59.412 [448/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:59.412 [449/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:59.412 [450/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:59.412 [451/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:59.412 [452/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:59.412 [453/745] Linking static target lib/librte_cryptodev.a 00:01:59.412 [454/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:59.412 [455/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.412 [456/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:59.412 
[457/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:59.412 [458/745] Generating drivers/rte_mempool_ring_def with a custom command 00:01:59.412 [459/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:59.412 [460/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:59.681 [461/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:59.681 [462/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:59.681 [463/745] Linking static target lib/librte_pdump.a 00:01:59.681 [464/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:59.681 [465/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:59.681 [466/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:59.681 [467/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:59.681 [468/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:59.681 [469/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:59.681 [470/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:59.681 [471/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:59.942 [472/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:59.942 [473/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:59.942 [474/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:59.942 [475/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:59.942 [476/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:59.942 [477/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:59.942 [478/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.209 [479/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:00.209 [480/745] Generating drivers/rte_net_i40e_def with a custom command 00:02:00.209 [481/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.209 [482/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:00.209 [483/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:00.209 [484/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:00.209 [485/745] Linking static target lib/librte_ipsec.a 00:02:00.209 [486/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:00.209 [487/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:00.209 [488/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:00.209 [489/745] Linking static target lib/librte_table.a 00:02:00.209 [490/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:00.474 [491/745] Linking static target drivers/librte_bus_vdev.a 00:02:00.474 [492/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:00.474 [493/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:00.742 [494/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:00.742 [495/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:00.742 [496/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:00.742 [497/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:00.742 [498/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:00.742 [499/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:01.011 [500/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:01.011 [501/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.011 [502/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:01.011 [503/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:01.011 [504/745] Linking static target lib/librte_graph.a 00:02:01.011 [505/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:01.011 [506/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:01.011 [507/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:01.011 [508/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:01.011 [509/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:01.011 [510/745] Linking static target drivers/librte_bus_pci.a 00:02:01.011 [511/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:01.011 [512/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:01.281 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:01.545 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.545 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:01.810 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.810 [517/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:01.810 [518/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:01.810 [519/745] Linking static target lib/librte_port.a 00:02:01.810 [520/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:01.810 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:02.077 [522/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:02.077 [523/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.077 [524/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:02.077 [525/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:02.077 [526/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:02.347 [527/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:02.347 [528/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:02.347 [529/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:02.347 [530/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:02.347 [531/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:02.347 [532/745] Linking static target drivers/librte_mempool_ring.a 00:02:02.347 [533/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.616 [534/745] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:02.616 [535/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:02.616 [536/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:02.616 [537/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:02.885 [538/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:02.885 [539/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:02.885 [540/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.154 [541/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:03.154 [542/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.154 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:03.154 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:03.415 [545/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:03.415 [546/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:03.415 [547/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:03.683 [548/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:03.683 [549/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:03.683 [550/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:03.683 [551/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:03.948 [552/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:03.948 [553/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:03.948 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:03.948 [555/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:03.948 [556/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:04.218 [557/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:04.218 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:04.483 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:04.748 [560/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:04.748 [561/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:04.748 [562/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:04.748 [563/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:05.013 [564/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:05.013 [565/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:05.013 [566/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:05.013 [567/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:05.013 [568/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:05.013 [569/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:05.013 [570/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 
00:02:05.280 [571/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:05.280 [572/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:05.280 [573/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:05.546 [574/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:05.546 [575/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:05.546 [576/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:05.546 [577/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:05.546 [578/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:05.546 [579/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:05.546 [580/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:05.546 [581/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:05.546 [582/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:05.809 [583/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:06.073 [584/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:06.073 [585/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:06.338 [586/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.338 [587/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:06.338 [588/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:06.602 [589/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:06.602 [590/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.602 [591/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:06.602 [592/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:06.602 [593/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:06.602 [594/745] Linking target lib/librte_eal.so.23.0 00:02:06.871 [595/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:06.871 [596/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:06.871 [597/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:06.871 [598/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:06.871 [599/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:06.871 [600/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:06.871 [601/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:07.133 [602/745] Linking target lib/librte_timer.so.23.0 00:02:07.133 [603/745] Linking target lib/librte_cfgfile.so.23.0 00:02:07.133 [604/745] Linking target lib/librte_stack.so.23.0 00:02:07.133 [605/745] Linking target lib/librte_ring.so.23.0 00:02:07.133 [606/745] Linking target lib/librte_graph.so.23.0 00:02:07.133 [607/745] Linking target lib/librte_jobstats.so.23.0 00:02:07.133 [608/745] Linking target lib/librte_pci.so.23.0 00:02:07.133 [609/745] Linking target drivers/librte_bus_vdev.so.23.0 00:02:07.133 [610/745] Linking target 
lib/librte_rawdev.so.23.0 00:02:07.133 [611/745] Linking target lib/librte_dmadev.so.23.0 00:02:07.133 [612/745] Linking target lib/librte_meter.so.23.0 00:02:07.133 [613/745] Linking target lib/librte_acl.so.23.0 00:02:07.133 [614/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:07.133 [615/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:07.133 [616/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:07.133 [617/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:07.133 [618/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:07.133 [619/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:07.133 [620/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:07.133 [621/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:07.133 [622/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:07.133 [623/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:07.133 [624/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:07.133 [625/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:07.133 [626/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:07.393 [627/745] Linking target lib/librte_rcu.so.23.0 00:02:07.393 [628/745] Linking target lib/librte_mempool.so.23.0 00:02:07.393 [629/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:07.393 [630/745] Linking target drivers/librte_bus_pci.so.23.0 00:02:07.393 [631/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:07.393 [632/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:07.393 [633/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:07.393 [634/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:07.393 [635/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:07.393 [636/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:07.393 [637/745] Linking target drivers/librte_mempool_ring.so.23.0 00:02:07.393 [638/745] Linking target lib/librte_rib.so.23.0 00:02:07.393 [639/745] Linking target lib/librte_mbuf.so.23.0 00:02:07.393 [640/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:07.393 [641/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:07.653 [642/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:07.653 [643/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:07.653 [644/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:07.653 [645/745] Linking target lib/librte_net.so.23.0 00:02:07.653 [646/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:07.653 [647/745] Linking target lib/librte_fib.so.23.0 00:02:07.653 [648/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:07.653 [649/745] Linking target lib/librte_regexdev.so.23.0 00:02:07.653 [650/745] Linking target lib/librte_gpudev.so.23.0 00:02:07.653 [651/745] Linking target lib/librte_sched.so.23.0 00:02:07.653 [652/745] Linking target lib/librte_reorder.so.23.0 00:02:07.653 
[653/745] Linking target lib/librte_bbdev.so.23.0 00:02:07.653 [654/745] Linking target lib/librte_distributor.so.23.0 00:02:07.653 [655/745] Linking target lib/librte_compressdev.so.23.0 00:02:07.653 [656/745] Linking target lib/librte_cryptodev.so.23.0 00:02:07.912 [657/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:07.912 [658/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:07.912 [659/745] Linking target lib/librte_cmdline.so.23.0 00:02:07.912 [660/745] Linking target lib/librte_hash.so.23.0 00:02:07.912 [661/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:07.912 [662/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:07.912 [663/745] Linking target lib/librte_ethdev.so.23.0 00:02:07.912 [664/745] Linking target lib/librte_security.so.23.0 00:02:07.912 [665/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:07.912 [666/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:07.912 [667/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:08.172 [668/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:08.172 [669/745] Linking target lib/librte_efd.so.23.0 00:02:08.172 [670/745] Linking target lib/librte_pcapng.so.23.0 00:02:08.172 [671/745] Linking target lib/librte_lpm.so.23.0 00:02:08.172 [672/745] Linking target lib/librte_bpf.so.23.0 00:02:08.172 [673/745] Linking target lib/librte_member.so.23.0 00:02:08.172 [674/745] Linking target lib/librte_metrics.so.23.0 00:02:08.172 [675/745] Linking target lib/librte_gso.so.23.0 00:02:08.172 [676/745] Linking target lib/librte_ip_frag.so.23.0 00:02:08.172 [677/745] Linking target lib/librte_gro.so.23.0 00:02:08.172 [678/745] Linking target lib/librte_power.so.23.0 00:02:08.172 [679/745] Linking target lib/librte_ipsec.so.23.0 00:02:08.172 [680/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:08.172 [681/745] Linking target lib/librte_eventdev.so.23.0 00:02:08.172 [682/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:08.172 [683/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:08.172 [684/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:08.172 [685/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:08.172 [686/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:08.172 [687/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:08.172 [688/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:08.172 [689/745] Linking target lib/librte_pdump.so.23.0 00:02:08.430 [690/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:08.430 [691/745] Linking target lib/librte_bitratestats.so.23.0 00:02:08.430 [692/745] Linking target lib/librte_latencystats.so.23.0 00:02:08.430 [693/745] Linking target lib/librte_port.so.23.0 00:02:08.430 [694/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:08.430 [695/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:08.430 [696/745] Linking target lib/librte_table.so.23.0 00:02:08.688 
[697/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:08.688 [698/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:08.688 [699/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:08.947 [700/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:08.947 [701/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:08.947 [702/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:08.947 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:09.205 [704/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:09.464 [705/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:09.464 [706/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:09.464 [707/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:09.464 [708/745] Linking static target drivers/librte_net_i40e.a 00:02:09.464 [709/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:09.723 [710/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:09.982 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.982 [712/745] Linking target drivers/librte_net_i40e.so.23.0 00:02:10.916 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:10.916 [714/745] Linking static target lib/librte_node.a 00:02:11.174 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.174 [716/745] Linking target lib/librte_node.so.23.0 00:02:11.433 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:11.692 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:12.627 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:20.746 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:52.821 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:52.821 [722/745] Linking static target lib/librte_vhost.a 00:02:52.821 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.821 [724/745] Linking target lib/librte_vhost.so.23.0 00:03:05.029 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:05.029 [726/745] Linking static target lib/librte_pipeline.a 00:03:05.029 [727/745] Linking target app/dpdk-test-sad 00:03:05.029 [728/745] Linking target app/dpdk-test-regex 00:03:05.029 [729/745] Linking target app/dpdk-test-acl 00:03:05.029 [730/745] Linking target app/dpdk-test-fib 00:03:05.029 [731/745] Linking target app/dpdk-pdump 00:03:05.029 [732/745] Linking target app/dpdk-test-security-perf 00:03:05.029 [733/745] Linking target app/dpdk-test-cmdline 00:03:05.029 [734/745] Linking target app/dpdk-test-gpudev 00:03:05.029 [735/745] Linking target app/dpdk-dumpcap 00:03:05.029 [736/745] Linking target app/dpdk-test-flow-perf 00:03:05.029 [737/745] Linking target app/dpdk-test-pipeline 00:03:05.029 [738/745] Linking target app/dpdk-proc-info 00:03:05.029 [739/745] Linking target app/dpdk-test-crypto-perf 00:03:05.029 [740/745] Linking target app/dpdk-test-bbdev 00:03:05.029 [741/745] Linking target app/dpdk-test-eventdev 00:03:05.029 [742/745] 
Linking target app/dpdk-testpmd
00:03:05.029 [743/745] Linking target app/dpdk-test-compress-perf
00:03:05.967 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:05.967 [745/745] Linking target lib/librte_pipeline.so.23.0
00:03:05.967 06:48:26 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:03:05.967 06:48:26 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:03:05.967 06:48:26 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install
00:03:06.225 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:03:06.225 [0/1] Installing files.
00:03:06.489 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples
00:03:06.489 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:03:06.489 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:03:06.489 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:03:06.489 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.490 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:06.491 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.491 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 
00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:06.492 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.492 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.492 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.493 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:06.494 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:06.494 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:06.495 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.495 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:06.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:06.495 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.495 Installing 
lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.495 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.495 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.495 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.495 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.495 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.495 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.495 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.495 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.495 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.495 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.495 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.495 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.495 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.495 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.495 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.755 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_bbdev.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_member.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_table.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.756 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.019 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.019 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.019 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.019 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.019 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:07.019 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.019 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:07.019 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.019 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:07.019 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.019 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:07.019 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.019 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.019 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.019 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.019 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.019 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.019 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.019 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.019 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.019 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.019 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.019 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.019 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.019 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.019 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.019 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.019 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 
00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:07.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:07.023 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:03:07.023 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:07.023 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:03:07.023 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:07.023 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:03:07.023 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:07.023 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:03:07.023 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:07.023 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:03:07.023 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:07.023 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:03:07.023 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:07.023 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:03:07.023 Installing symlink pointing to librte_mbuf.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:07.023 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:03:07.023 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:07.023 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:03:07.023 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:07.023 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:03:07.023 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:07.023 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:03:07.023 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:07.023 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:03:07.023 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:07.023 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:03:07.023 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:07.023 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:03:07.023 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:07.023 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:03:07.023 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:07.023 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:03:07.023 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:07.023 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:03:07.023 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:07.023 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:03:07.023 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:07.023 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:03:07.023 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 
00:03:07.023 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:03:07.023 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:07.023 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:03:07.023 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:07.023 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:03:07.023 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:07.023 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:03:07.023 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:07.023 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:03:07.023 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:07.023 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:03:07.023 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:07.023 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:03:07.283 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:07.283 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:03:07.283 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:07.283 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:03:07.283 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:07.283 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:03:07.283 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:07.283 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:03:07.283 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:07.283 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:03:07.283 Installing symlink pointing to librte_latencystats.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:07.283 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:03:07.283 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:07.283 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:03:07.283 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:07.283 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:03:07.283 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:07.283 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:03:07.283 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:07.283 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:03:07.283 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:07.283 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:03:07.283 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:07.283 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:03:07.283 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:07.283 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:03:07.283 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:07.283 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:03:07.283 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:07.283 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:03:07.283 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:07.283 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:03:07.284 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:07.284 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:03:07.284 Installing symlink pointing to librte_stack.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:07.284 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:03:07.284 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:07.284 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:03:07.284 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:07.284 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:03:07.284 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:07.284 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:03:07.284 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:07.284 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:03:07.284 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:07.284 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:03:07.284 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:07.284 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:03:07.284 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:07.284 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:03:07.284 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:07.284 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:03:07.284 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:07.284 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:07.284 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:07.284 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:07.284 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:07.284 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:07.284 Installing symlink pointing to 
librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:07.284 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:07.284 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:07.284 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:07.284 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:07.284 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:07.284 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:07.284 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:07.284 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:07.284 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:07.284 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:07.284 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:07.284 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:07.284 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:07.284 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:07.284 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:07.284 06:48:28 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:07.284 06:48:28 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:07.284 00:03:07.284 real 1m24.856s 00:03:07.284 user 14m25.580s 00:03:07.284 sys 1m53.439s 00:03:07.284 06:48:28 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:07.284 06:48:28 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:07.284 ************************************ 00:03:07.284 END TEST build_native_dpdk 00:03:07.284 ************************************ 00:03:07.284 06:48:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:07.284 06:48:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:07.284 06:48:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:07.284 06:48:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:07.284 06:48:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:07.284 06:48:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:07.284 06:48:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:07.284 06:48:28 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:07.284 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
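Note on the step above: the DPDK tree was installed into dpdk/build (headers under include/, libdpdk.pc and libdpdk-libs.pc under lib/pkgconfig, PMD shared objects symlinked into dpdk/pmds-23.0 by symlink-drivers-solibs.sh), and SPDK's configure consumes that tree through --with-dpdk, which is why it reports the extra pkg-config directory. A minimal sketch of the same hand-off outside this job, assuming the workspace paths above; the DPDK_BUILD variable and the trimmed flag list are illustrative, not copied from the log:

  # Point at the DPDK install produced by the build above.
  DPDK_BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build

  # libdpdk.pc was installed into $DPDK_BUILD/lib/pkgconfig, so pkg-config can report
  # the version plus the compile and link flags the SPDK build will consume.
  PKG_CONFIG_PATH=$DPDK_BUILD/lib/pkgconfig pkg-config --modversion libdpdk
  PKG_CONFIG_PATH=$DPDK_BUILD/lib/pkgconfig pkg-config --cflags --libs libdpdk

  # From the SPDK source directory, configure against the external DPDK build instead
  # of the bundled submodule (subset of the flags used in this run).
  ./configure --with-dpdk=$DPDK_BUILD --with-shared --enable-debug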
00:03:07.544 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.544 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.544 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:07.803 Using 'verbs' RDMA provider 00:03:18.729 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:28.737 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:28.737 Creating mk/config.mk...done. 00:03:28.737 Creating mk/cc.flags.mk...done. 00:03:28.737 Type 'make' to build. 00:03:28.737 06:48:49 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:03:28.737 06:48:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:28.737 06:48:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:28.737 06:48:49 -- common/autotest_common.sh@10 -- $ set +x 00:03:28.737 ************************************ 00:03:28.737 START TEST make 00:03:28.737 ************************************ 00:03:28.737 06:48:49 make -- common/autotest_common.sh@1129 -- $ make -j48 00:03:28.737 make[1]: Nothing to be done for 'all'. 00:03:30.680 The Meson build system 00:03:30.680 Version: 1.5.0 00:03:30.680 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:30.680 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:30.680 Build type: native build 00:03:30.680 Project name: libvfio-user 00:03:30.680 Project version: 0.0.1 00:03:30.680 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:30.680 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:30.680 Host machine cpu family: x86_64 00:03:30.680 Host machine cpu: x86_64 00:03:30.680 Run-time dependency threads found: YES 00:03:30.680 Library dl found: YES 00:03:30.680 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:30.680 Run-time dependency json-c found: YES 0.17 00:03:30.680 Run-time dependency cmocka found: YES 1.1.7 00:03:30.680 Program pytest-3 found: NO 00:03:30.680 Program flake8 found: NO 00:03:30.680 Program misspell-fixer found: NO 00:03:30.680 Program restructuredtext-lint found: NO 00:03:30.680 Program valgrind found: YES (/usr/bin/valgrind) 00:03:30.680 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:30.680 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:30.680 Compiler for C supports arguments -Wwrite-strings: YES 00:03:30.680 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:30.680 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:30.680 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:30.680 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:30.680 Build targets in project: 8 00:03:30.680 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:30.680 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:30.680 00:03:30.680 libvfio-user 0.0.1 00:03:30.680 00:03:30.680 User defined options 00:03:30.680 buildtype : debug 00:03:30.680 default_library: shared 00:03:30.680 libdir : /usr/local/lib 00:03:30.680 00:03:30.680 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:31.267 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:31.530 [1/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:31.530 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:31.530 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:31.530 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:31.530 [5/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:31.530 [6/37] Compiling C object samples/null.p/null.c.o 00:03:31.530 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:31.530 [8/37] Compiling C object samples/server.p/server.c.o 00:03:31.530 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:31.530 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:31.530 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:31.530 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:31.530 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:31.530 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:31.530 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:31.530 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:31.530 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:31.530 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:31.530 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:31.530 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:31.530 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:31.530 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:31.530 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:31.530 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:31.530 [25/37] Compiling C object samples/client.p/client.c.o 00:03:31.530 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:31.530 [27/37] Linking target samples/client 00:03:31.795 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:31.795 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:31.795 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:31.795 [31/37] Linking target test/unit_tests 00:03:32.057 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:32.057 [33/37] Linking target samples/server 00:03:32.057 [34/37] Linking target samples/null 00:03:32.057 [35/37] Linking target samples/shadow_ioeventfd_server 00:03:32.057 [36/37] Linking target samples/lspci 00:03:32.057 [37/37] Linking target samples/gpio-pci-idio-16 00:03:32.057 INFO: autodetecting backend as ninja 00:03:32.057 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
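Note on the Meson summary above: libvfio-user is configured as a debug build with shared libraries and libdir /usr/local/lib, built out of tree under spdk/build/libvfio-user/build-debug, and then staged under spdk/build/libvfio-user via a DESTDIR install (next entry). A rough standalone equivalent, assuming a libvfio-user source checkout at ../libvfio-user and a scratch staging path; both are placeholders, not paths taken from the log:

  # Configure an out-of-tree debug build with shared libraries, mirroring the
  # "User defined options" shown above.
  meson setup build-debug ../libvfio-user \
      --buildtype=debug -Ddefault_library=shared --libdir=/usr/local/lib

  # Compile, then stage the install into a scratch prefix rather than the live system,
  # the same way the job redirects it with DESTDIR.
  ninja -C build-debug
  DESTDIR=/tmp/libvfio-user-stage meson install --quiet -C build-debug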
00:03:32.057 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:33.003 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:33.003 ninja: no work to do. 00:04:11.724 CC lib/log/log.o 00:04:11.725 CC lib/log/log_flags.o 00:04:11.725 CC lib/ut/ut.o 00:04:11.725 CC lib/log/log_deprecated.o 00:04:11.725 CC lib/ut_mock/mock.o 00:04:11.725 LIB libspdk_ut.a 00:04:11.725 LIB libspdk_ut_mock.a 00:04:11.725 LIB libspdk_log.a 00:04:11.725 SO libspdk_ut_mock.so.6.0 00:04:11.725 SO libspdk_ut.so.2.0 00:04:11.725 SO libspdk_log.so.7.1 00:04:11.725 SYMLINK libspdk_ut_mock.so 00:04:11.725 SYMLINK libspdk_ut.so 00:04:11.725 SYMLINK libspdk_log.so 00:04:11.725 CXX lib/trace_parser/trace.o 00:04:11.725 CC lib/dma/dma.o 00:04:11.725 CC lib/ioat/ioat.o 00:04:11.725 CC lib/util/base64.o 00:04:11.725 CC lib/util/bit_array.o 00:04:11.725 CC lib/util/cpuset.o 00:04:11.725 CC lib/util/crc16.o 00:04:11.725 CC lib/util/crc32.o 00:04:11.725 CC lib/util/crc32c.o 00:04:11.725 CC lib/util/crc32_ieee.o 00:04:11.725 CC lib/util/crc64.o 00:04:11.725 CC lib/util/dif.o 00:04:11.725 CC lib/util/fd.o 00:04:11.725 CC lib/util/fd_group.o 00:04:11.725 CC lib/util/file.o 00:04:11.725 CC lib/util/hexlify.o 00:04:11.725 CC lib/util/iov.o 00:04:11.725 CC lib/util/math.o 00:04:11.725 CC lib/util/net.o 00:04:11.725 CC lib/util/pipe.o 00:04:11.725 CC lib/util/strerror_tls.o 00:04:11.725 CC lib/util/string.o 00:04:11.725 CC lib/util/uuid.o 00:04:11.725 CC lib/util/xor.o 00:04:11.725 CC lib/util/zipf.o 00:04:11.725 CC lib/util/md5.o 00:04:11.725 CC lib/vfio_user/host/vfio_user_pci.o 00:04:11.725 CC lib/vfio_user/host/vfio_user.o 00:04:11.725 LIB libspdk_dma.a 00:04:11.725 SO libspdk_dma.so.5.0 00:04:11.725 SYMLINK libspdk_dma.so 00:04:11.725 LIB libspdk_ioat.a 00:04:11.725 SO libspdk_ioat.so.7.0 00:04:11.725 LIB libspdk_vfio_user.a 00:04:11.725 SYMLINK libspdk_ioat.so 00:04:11.725 SO libspdk_vfio_user.so.5.0 00:04:11.725 SYMLINK libspdk_vfio_user.so 00:04:11.725 LIB libspdk_util.a 00:04:11.725 SO libspdk_util.so.10.1 00:04:11.725 SYMLINK libspdk_util.so 00:04:11.725 CC lib/conf/conf.o 00:04:11.725 CC lib/json/json_parse.o 00:04:11.725 CC lib/idxd/idxd.o 00:04:11.725 CC lib/json/json_util.o 00:04:11.725 CC lib/vmd/vmd.o 00:04:11.725 CC lib/idxd/idxd_user.o 00:04:11.725 CC lib/rdma_utils/rdma_utils.o 00:04:11.725 CC lib/json/json_write.o 00:04:11.725 CC lib/vmd/led.o 00:04:11.725 CC lib/env_dpdk/env.o 00:04:11.725 CC lib/idxd/idxd_kernel.o 00:04:11.725 CC lib/env_dpdk/memory.o 00:04:11.725 CC lib/env_dpdk/pci.o 00:04:11.725 CC lib/env_dpdk/init.o 00:04:11.725 CC lib/env_dpdk/threads.o 00:04:11.725 CC lib/env_dpdk/pci_ioat.o 00:04:11.725 CC lib/env_dpdk/pci_virtio.o 00:04:11.725 CC lib/env_dpdk/pci_vmd.o 00:04:11.725 CC lib/env_dpdk/pci_idxd.o 00:04:11.725 CC lib/env_dpdk/pci_event.o 00:04:11.725 CC lib/env_dpdk/sigbus_handler.o 00:04:11.725 CC lib/env_dpdk/pci_dpdk.o 00:04:11.725 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:11.725 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:11.725 LIB libspdk_conf.a 00:04:11.725 SO libspdk_conf.so.6.0 00:04:11.725 LIB libspdk_rdma_utils.a 00:04:11.725 LIB libspdk_json.a 00:04:11.725 SYMLINK libspdk_conf.so 00:04:11.725 SO libspdk_rdma_utils.so.1.0 00:04:11.725 SO libspdk_json.so.6.0 00:04:11.725 SYMLINK libspdk_rdma_utils.so 00:04:11.725 SYMLINK libspdk_json.so 00:04:11.725 CC lib/rdma_provider/common.o 00:04:11.725 CC 
lib/rdma_provider/rdma_provider_verbs.o 00:04:11.725 CC lib/jsonrpc/jsonrpc_server.o 00:04:11.725 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:11.725 CC lib/jsonrpc/jsonrpc_client.o 00:04:11.725 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:11.725 LIB libspdk_idxd.a 00:04:11.725 SO libspdk_idxd.so.12.1 00:04:11.725 SYMLINK libspdk_idxd.so 00:04:11.725 LIB libspdk_vmd.a 00:04:11.725 SO libspdk_vmd.so.6.0 00:04:11.725 SYMLINK libspdk_vmd.so 00:04:11.725 LIB libspdk_rdma_provider.a 00:04:11.725 SO libspdk_rdma_provider.so.7.0 00:04:11.725 LIB libspdk_jsonrpc.a 00:04:11.725 SYMLINK libspdk_rdma_provider.so 00:04:11.725 SO libspdk_jsonrpc.so.6.0 00:04:11.725 SYMLINK libspdk_jsonrpc.so 00:04:11.725 LIB libspdk_trace_parser.a 00:04:11.725 SO libspdk_trace_parser.so.6.0 00:04:11.725 CC lib/rpc/rpc.o 00:04:11.725 SYMLINK libspdk_trace_parser.so 00:04:11.984 LIB libspdk_rpc.a 00:04:11.984 SO libspdk_rpc.so.6.0 00:04:11.984 SYMLINK libspdk_rpc.so 00:04:12.243 CC lib/trace/trace_flags.o 00:04:12.243 CC lib/trace/trace.o 00:04:12.243 CC lib/trace/trace_rpc.o 00:04:12.243 CC lib/keyring/keyring.o 00:04:12.243 CC lib/keyring/keyring_rpc.o 00:04:12.243 CC lib/notify/notify.o 00:04:12.243 CC lib/notify/notify_rpc.o 00:04:12.502 LIB libspdk_notify.a 00:04:12.502 SO libspdk_notify.so.6.0 00:04:12.502 LIB libspdk_keyring.a 00:04:12.502 SYMLINK libspdk_notify.so 00:04:12.502 LIB libspdk_trace.a 00:04:12.502 SO libspdk_keyring.so.2.0 00:04:12.502 SO libspdk_trace.so.11.0 00:04:12.502 SYMLINK libspdk_keyring.so 00:04:12.502 SYMLINK libspdk_trace.so 00:04:12.761 CC lib/sock/sock.o 00:04:12.761 CC lib/sock/sock_rpc.o 00:04:12.761 CC lib/thread/thread.o 00:04:12.761 CC lib/thread/iobuf.o 00:04:12.761 LIB libspdk_env_dpdk.a 00:04:12.761 SO libspdk_env_dpdk.so.15.1 00:04:13.019 SYMLINK libspdk_env_dpdk.so 00:04:13.279 LIB libspdk_sock.a 00:04:13.279 SO libspdk_sock.so.10.0 00:04:13.279 SYMLINK libspdk_sock.so 00:04:13.539 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:13.539 CC lib/nvme/nvme_ctrlr.o 00:04:13.539 CC lib/nvme/nvme_fabric.o 00:04:13.539 CC lib/nvme/nvme_ns_cmd.o 00:04:13.539 CC lib/nvme/nvme_ns.o 00:04:13.539 CC lib/nvme/nvme_pcie_common.o 00:04:13.539 CC lib/nvme/nvme_pcie.o 00:04:13.539 CC lib/nvme/nvme_qpair.o 00:04:13.539 CC lib/nvme/nvme.o 00:04:13.539 CC lib/nvme/nvme_quirks.o 00:04:13.539 CC lib/nvme/nvme_transport.o 00:04:13.539 CC lib/nvme/nvme_discovery.o 00:04:13.539 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:13.539 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:13.539 CC lib/nvme/nvme_tcp.o 00:04:13.539 CC lib/nvme/nvme_opal.o 00:04:13.539 CC lib/nvme/nvme_io_msg.o 00:04:13.539 CC lib/nvme/nvme_poll_group.o 00:04:13.539 CC lib/nvme/nvme_zns.o 00:04:13.539 CC lib/nvme/nvme_stubs.o 00:04:13.539 CC lib/nvme/nvme_auth.o 00:04:13.539 CC lib/nvme/nvme_cuse.o 00:04:13.539 CC lib/nvme/nvme_vfio_user.o 00:04:13.539 CC lib/nvme/nvme_rdma.o 00:04:14.477 LIB libspdk_thread.a 00:04:14.477 SO libspdk_thread.so.11.0 00:04:14.477 SYMLINK libspdk_thread.so 00:04:14.736 CC lib/accel/accel.o 00:04:14.736 CC lib/accel/accel_rpc.o 00:04:14.736 CC lib/fsdev/fsdev.o 00:04:14.736 CC lib/accel/accel_sw.o 00:04:14.736 CC lib/fsdev/fsdev_io.o 00:04:14.736 CC lib/fsdev/fsdev_rpc.o 00:04:14.736 CC lib/blob/blobstore.o 00:04:14.736 CC lib/init/json_config.o 00:04:14.736 CC lib/blob/request.o 00:04:14.736 CC lib/init/subsystem.o 00:04:14.736 CC lib/blob/zeroes.o 00:04:14.736 CC lib/init/subsystem_rpc.o 00:04:14.736 CC lib/blob/blob_bs_dev.o 00:04:14.736 CC lib/init/rpc.o 00:04:14.736 CC lib/vfu_tgt/tgt_endpoint.o 00:04:14.736 CC lib/virtio/virtio.o 
00:04:14.736 CC lib/vfu_tgt/tgt_rpc.o 00:04:14.736 CC lib/virtio/virtio_vhost_user.o 00:04:14.736 CC lib/virtio/virtio_vfio_user.o 00:04:14.736 CC lib/virtio/virtio_pci.o 00:04:14.995 LIB libspdk_init.a 00:04:14.995 SO libspdk_init.so.6.0 00:04:14.995 LIB libspdk_virtio.a 00:04:14.995 LIB libspdk_vfu_tgt.a 00:04:14.995 SYMLINK libspdk_init.so 00:04:14.995 SO libspdk_virtio.so.7.0 00:04:14.995 SO libspdk_vfu_tgt.so.3.0 00:04:14.995 SYMLINK libspdk_vfu_tgt.so 00:04:14.995 SYMLINK libspdk_virtio.so 00:04:15.254 CC lib/event/app.o 00:04:15.254 CC lib/event/reactor.o 00:04:15.254 CC lib/event/log_rpc.o 00:04:15.254 CC lib/event/app_rpc.o 00:04:15.254 CC lib/event/scheduler_static.o 00:04:15.514 LIB libspdk_fsdev.a 00:04:15.514 SO libspdk_fsdev.so.2.0 00:04:15.514 SYMLINK libspdk_fsdev.so 00:04:15.514 LIB libspdk_event.a 00:04:15.773 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:15.773 SO libspdk_event.so.14.0 00:04:15.773 SYMLINK libspdk_event.so 00:04:15.773 LIB libspdk_accel.a 00:04:15.773 SO libspdk_accel.so.16.0 00:04:16.032 SYMLINK libspdk_accel.so 00:04:16.032 LIB libspdk_nvme.a 00:04:16.032 SO libspdk_nvme.so.15.0 00:04:16.032 CC lib/bdev/bdev.o 00:04:16.032 CC lib/bdev/bdev_rpc.o 00:04:16.032 CC lib/bdev/bdev_zone.o 00:04:16.032 CC lib/bdev/part.o 00:04:16.032 CC lib/bdev/scsi_nvme.o 00:04:16.292 SYMLINK libspdk_nvme.so 00:04:16.292 LIB libspdk_fuse_dispatcher.a 00:04:16.292 SO libspdk_fuse_dispatcher.so.1.0 00:04:16.292 SYMLINK libspdk_fuse_dispatcher.so 00:04:17.671 LIB libspdk_blob.a 00:04:17.671 SO libspdk_blob.so.11.0 00:04:17.931 SYMLINK libspdk_blob.so 00:04:17.931 CC lib/lvol/lvol.o 00:04:17.931 CC lib/blobfs/blobfs.o 00:04:17.931 CC lib/blobfs/tree.o 00:04:18.867 LIB libspdk_bdev.a 00:04:18.867 LIB libspdk_blobfs.a 00:04:18.867 SO libspdk_bdev.so.17.0 00:04:18.867 SO libspdk_blobfs.so.10.0 00:04:18.867 SYMLINK libspdk_blobfs.so 00:04:18.867 SYMLINK libspdk_bdev.so 00:04:18.867 LIB libspdk_lvol.a 00:04:18.867 SO libspdk_lvol.so.10.0 00:04:18.867 SYMLINK libspdk_lvol.so 00:04:19.133 CC lib/scsi/dev.o 00:04:19.133 CC lib/nvmf/ctrlr.o 00:04:19.133 CC lib/nvmf/ctrlr_discovery.o 00:04:19.133 CC lib/scsi/lun.o 00:04:19.133 CC lib/nbd/nbd.o 00:04:19.133 CC lib/scsi/port.o 00:04:19.133 CC lib/nvmf/ctrlr_bdev.o 00:04:19.133 CC lib/nbd/nbd_rpc.o 00:04:19.133 CC lib/scsi/scsi.o 00:04:19.133 CC lib/nvmf/subsystem.o 00:04:19.133 CC lib/scsi/scsi_bdev.o 00:04:19.133 CC lib/nvmf/nvmf.o 00:04:19.133 CC lib/scsi/scsi_pr.o 00:04:19.133 CC lib/scsi/scsi_rpc.o 00:04:19.133 CC lib/nvmf/nvmf_rpc.o 00:04:19.133 CC lib/ublk/ublk.o 00:04:19.133 CC lib/scsi/task.o 00:04:19.133 CC lib/ublk/ublk_rpc.o 00:04:19.133 CC lib/nvmf/transport.o 00:04:19.133 CC lib/ftl/ftl_core.o 00:04:19.133 CC lib/nvmf/stubs.o 00:04:19.133 CC lib/nvmf/tcp.o 00:04:19.133 CC lib/ftl/ftl_init.o 00:04:19.133 CC lib/nvmf/vfio_user.o 00:04:19.133 CC lib/nvmf/mdns_server.o 00:04:19.133 CC lib/ftl/ftl_layout.o 00:04:19.133 CC lib/ftl/ftl_debug.o 00:04:19.133 CC lib/nvmf/rdma.o 00:04:19.133 CC lib/nvmf/auth.o 00:04:19.133 CC lib/ftl/ftl_io.o 00:04:19.133 CC lib/ftl/ftl_sb.o 00:04:19.133 CC lib/ftl/ftl_l2p.o 00:04:19.133 CC lib/ftl/ftl_l2p_flat.o 00:04:19.133 CC lib/ftl/ftl_nv_cache.o 00:04:19.133 CC lib/ftl/ftl_band.o 00:04:19.133 CC lib/ftl/ftl_band_ops.o 00:04:19.133 CC lib/ftl/ftl_writer.o 00:04:19.133 CC lib/ftl/ftl_rq.o 00:04:19.133 CC lib/ftl/ftl_reloc.o 00:04:19.133 CC lib/ftl/ftl_l2p_cache.o 00:04:19.133 CC lib/ftl/ftl_p2l.o 00:04:19.133 CC lib/ftl/ftl_p2l_log.o 00:04:19.133 CC lib/ftl/mngt/ftl_mngt.o 00:04:19.133 CC 
lib/ftl/mngt/ftl_mngt_bdev.o 00:04:19.133 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:19.133 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:19.133 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:19.133 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:19.397 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:19.397 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:19.397 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:19.397 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:19.397 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:19.397 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:19.397 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:19.397 CC lib/ftl/utils/ftl_conf.o 00:04:19.397 CC lib/ftl/utils/ftl_md.o 00:04:19.397 CC lib/ftl/utils/ftl_mempool.o 00:04:19.397 CC lib/ftl/utils/ftl_bitmap.o 00:04:19.397 CC lib/ftl/utils/ftl_property.o 00:04:19.656 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:19.656 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:19.656 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:19.656 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:19.656 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:19.656 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:19.656 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:19.656 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:19.656 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:19.656 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:19.656 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:19.656 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:19.656 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:19.656 CC lib/ftl/base/ftl_base_dev.o 00:04:19.918 CC lib/ftl/base/ftl_base_bdev.o 00:04:19.918 CC lib/ftl/ftl_trace.o 00:04:19.918 LIB libspdk_nbd.a 00:04:19.918 SO libspdk_nbd.so.7.0 00:04:19.918 SYMLINK libspdk_nbd.so 00:04:19.918 LIB libspdk_scsi.a 00:04:19.918 SO libspdk_scsi.so.9.0 00:04:20.178 SYMLINK libspdk_scsi.so 00:04:20.178 LIB libspdk_ublk.a 00:04:20.178 SO libspdk_ublk.so.3.0 00:04:20.178 SYMLINK libspdk_ublk.so 00:04:20.178 CC lib/iscsi/conn.o 00:04:20.178 CC lib/iscsi/init_grp.o 00:04:20.178 CC lib/vhost/vhost.o 00:04:20.178 CC lib/vhost/vhost_rpc.o 00:04:20.178 CC lib/iscsi/iscsi.o 00:04:20.178 CC lib/iscsi/param.o 00:04:20.178 CC lib/vhost/vhost_scsi.o 00:04:20.178 CC lib/iscsi/portal_grp.o 00:04:20.178 CC lib/vhost/vhost_blk.o 00:04:20.178 CC lib/iscsi/tgt_node.o 00:04:20.178 CC lib/vhost/rte_vhost_user.o 00:04:20.178 CC lib/iscsi/iscsi_subsystem.o 00:04:20.178 CC lib/iscsi/iscsi_rpc.o 00:04:20.178 CC lib/iscsi/task.o 00:04:20.747 LIB libspdk_ftl.a 00:04:20.747 SO libspdk_ftl.so.9.0 00:04:21.007 SYMLINK libspdk_ftl.so 00:04:21.575 LIB libspdk_vhost.a 00:04:21.575 SO libspdk_vhost.so.8.0 00:04:21.575 SYMLINK libspdk_vhost.so 00:04:21.836 LIB libspdk_nvmf.a 00:04:21.836 LIB libspdk_iscsi.a 00:04:21.836 SO libspdk_nvmf.so.20.0 00:04:21.836 SO libspdk_iscsi.so.8.0 00:04:21.836 SYMLINK libspdk_iscsi.so 00:04:22.095 SYMLINK libspdk_nvmf.so 00:04:22.355 CC module/env_dpdk/env_dpdk_rpc.o 00:04:22.355 CC module/vfu_device/vfu_virtio.o 00:04:22.355 CC module/vfu_device/vfu_virtio_blk.o 00:04:22.355 CC module/vfu_device/vfu_virtio_scsi.o 00:04:22.355 CC module/vfu_device/vfu_virtio_rpc.o 00:04:22.355 CC module/vfu_device/vfu_virtio_fs.o 00:04:22.355 CC module/accel/ioat/accel_ioat.o 00:04:22.355 CC module/keyring/linux/keyring.o 00:04:22.355 CC module/sock/posix/posix.o 00:04:22.355 CC module/blob/bdev/blob_bdev.o 00:04:22.355 CC module/accel/ioat/accel_ioat_rpc.o 00:04:22.355 CC module/keyring/linux/keyring_rpc.o 00:04:22.355 CC module/keyring/file/keyring.o 00:04:22.355 CC module/accel/dsa/accel_dsa.o 00:04:22.355 CC module/accel/iaa/accel_iaa.o 00:04:22.355 CC module/keyring/file/keyring_rpc.o 00:04:22.355 
CC module/accel/dsa/accel_dsa_rpc.o 00:04:22.355 CC module/scheduler/gscheduler/gscheduler.o 00:04:22.355 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:22.355 CC module/accel/iaa/accel_iaa_rpc.o 00:04:22.355 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:22.355 CC module/accel/error/accel_error.o 00:04:22.355 CC module/fsdev/aio/fsdev_aio.o 00:04:22.355 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:22.355 CC module/accel/error/accel_error_rpc.o 00:04:22.355 CC module/fsdev/aio/linux_aio_mgr.o 00:04:22.355 LIB libspdk_env_dpdk_rpc.a 00:04:22.355 SO libspdk_env_dpdk_rpc.so.6.0 00:04:22.614 SYMLINK libspdk_env_dpdk_rpc.so 00:04:22.614 LIB libspdk_keyring_linux.a 00:04:22.614 LIB libspdk_keyring_file.a 00:04:22.614 LIB libspdk_scheduler_gscheduler.a 00:04:22.614 SO libspdk_keyring_linux.so.1.0 00:04:22.614 SO libspdk_scheduler_gscheduler.so.4.0 00:04:22.614 SO libspdk_keyring_file.so.2.0 00:04:22.614 LIB libspdk_accel_ioat.a 00:04:22.614 LIB libspdk_accel_error.a 00:04:22.614 LIB libspdk_scheduler_dynamic.a 00:04:22.614 LIB libspdk_accel_iaa.a 00:04:22.614 LIB libspdk_scheduler_dpdk_governor.a 00:04:22.614 SO libspdk_accel_ioat.so.6.0 00:04:22.614 SYMLINK libspdk_scheduler_gscheduler.so 00:04:22.614 SYMLINK libspdk_keyring_linux.so 00:04:22.614 SYMLINK libspdk_keyring_file.so 00:04:22.614 SO libspdk_scheduler_dynamic.so.4.0 00:04:22.614 SO libspdk_accel_error.so.2.0 00:04:22.614 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:22.614 SO libspdk_accel_iaa.so.3.0 00:04:22.614 SYMLINK libspdk_accel_ioat.so 00:04:22.614 LIB libspdk_blob_bdev.a 00:04:22.614 SYMLINK libspdk_scheduler_dynamic.so 00:04:22.614 LIB libspdk_accel_dsa.a 00:04:22.614 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:22.614 SYMLINK libspdk_accel_error.so 00:04:22.614 SYMLINK libspdk_accel_iaa.so 00:04:22.614 SO libspdk_blob_bdev.so.11.0 00:04:22.614 SO libspdk_accel_dsa.so.5.0 00:04:22.614 SYMLINK libspdk_blob_bdev.so 00:04:22.874 SYMLINK libspdk_accel_dsa.so 00:04:22.874 LIB libspdk_vfu_device.a 00:04:22.874 SO libspdk_vfu_device.so.3.0 00:04:22.874 CC module/bdev/null/bdev_null.o 00:04:22.874 CC module/bdev/gpt/gpt.o 00:04:22.874 CC module/bdev/null/bdev_null_rpc.o 00:04:22.874 CC module/bdev/gpt/vbdev_gpt.o 00:04:22.874 CC module/blobfs/bdev/blobfs_bdev.o 00:04:22.874 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:22.874 CC module/bdev/delay/vbdev_delay.o 00:04:22.874 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:23.134 CC module/bdev/lvol/vbdev_lvol.o 00:04:23.134 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:23.134 CC module/bdev/malloc/bdev_malloc.o 00:04:23.134 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:23.134 CC module/bdev/error/vbdev_error.o 00:04:23.134 CC module/bdev/nvme/bdev_nvme.o 00:04:23.134 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:23.134 CC module/bdev/error/vbdev_error_rpc.o 00:04:23.134 CC module/bdev/split/vbdev_split.o 00:04:23.134 CC module/bdev/raid/bdev_raid.o 00:04:23.134 CC module/bdev/nvme/nvme_rpc.o 00:04:23.134 CC module/bdev/split/vbdev_split_rpc.o 00:04:23.134 CC module/bdev/iscsi/bdev_iscsi.o 00:04:23.134 CC module/bdev/raid/bdev_raid_rpc.o 00:04:23.134 CC module/bdev/aio/bdev_aio.o 00:04:23.134 CC module/bdev/passthru/vbdev_passthru.o 00:04:23.134 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:23.134 CC module/bdev/nvme/bdev_mdns_client.o 00:04:23.134 CC module/bdev/raid/bdev_raid_sb.o 00:04:23.134 CC module/bdev/aio/bdev_aio_rpc.o 00:04:23.134 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:23.134 CC module/bdev/nvme/vbdev_opal.o 00:04:23.134 CC module/bdev/raid/raid0.o 
00:04:23.134 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:23.134 CC module/bdev/ftl/bdev_ftl.o 00:04:23.134 CC module/bdev/raid/raid1.o 00:04:23.134 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:23.134 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:23.134 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:23.134 CC module/bdev/raid/concat.o 00:04:23.134 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:23.134 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:23.134 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:23.134 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:23.134 SYMLINK libspdk_vfu_device.so 00:04:23.394 LIB libspdk_fsdev_aio.a 00:04:23.394 LIB libspdk_sock_posix.a 00:04:23.394 SO libspdk_fsdev_aio.so.1.0 00:04:23.394 SO libspdk_sock_posix.so.6.0 00:04:23.394 LIB libspdk_blobfs_bdev.a 00:04:23.394 SO libspdk_blobfs_bdev.so.6.0 00:04:23.394 SYMLINK libspdk_fsdev_aio.so 00:04:23.394 LIB libspdk_bdev_gpt.a 00:04:23.394 SYMLINK libspdk_sock_posix.so 00:04:23.394 LIB libspdk_bdev_split.a 00:04:23.394 SYMLINK libspdk_blobfs_bdev.so 00:04:23.394 LIB libspdk_bdev_iscsi.a 00:04:23.394 SO libspdk_bdev_gpt.so.6.0 00:04:23.394 LIB libspdk_bdev_ftl.a 00:04:23.394 LIB libspdk_bdev_error.a 00:04:23.394 SO libspdk_bdev_split.so.6.0 00:04:23.394 LIB libspdk_bdev_null.a 00:04:23.394 SO libspdk_bdev_iscsi.so.6.0 00:04:23.394 SO libspdk_bdev_ftl.so.6.0 00:04:23.653 SO libspdk_bdev_error.so.6.0 00:04:23.653 SO libspdk_bdev_null.so.6.0 00:04:23.653 SYMLINK libspdk_bdev_gpt.so 00:04:23.653 SYMLINK libspdk_bdev_split.so 00:04:23.653 SYMLINK libspdk_bdev_iscsi.so 00:04:23.653 SYMLINK libspdk_bdev_ftl.so 00:04:23.653 SYMLINK libspdk_bdev_error.so 00:04:23.653 LIB libspdk_bdev_passthru.a 00:04:23.653 SYMLINK libspdk_bdev_null.so 00:04:23.653 LIB libspdk_bdev_zone_block.a 00:04:23.653 SO libspdk_bdev_passthru.so.6.0 00:04:23.653 LIB libspdk_bdev_aio.a 00:04:23.653 SO libspdk_bdev_zone_block.so.6.0 00:04:23.653 LIB libspdk_bdev_malloc.a 00:04:23.653 LIB libspdk_bdev_delay.a 00:04:23.653 SO libspdk_bdev_aio.so.6.0 00:04:23.653 SO libspdk_bdev_malloc.so.6.0 00:04:23.653 SO libspdk_bdev_delay.so.6.0 00:04:23.653 SYMLINK libspdk_bdev_passthru.so 00:04:23.653 SYMLINK libspdk_bdev_zone_block.so 00:04:23.653 SYMLINK libspdk_bdev_aio.so 00:04:23.653 SYMLINK libspdk_bdev_malloc.so 00:04:23.653 SYMLINK libspdk_bdev_delay.so 00:04:23.912 LIB libspdk_bdev_lvol.a 00:04:23.912 SO libspdk_bdev_lvol.so.6.0 00:04:23.912 LIB libspdk_bdev_virtio.a 00:04:23.912 SO libspdk_bdev_virtio.so.6.0 00:04:23.912 SYMLINK libspdk_bdev_lvol.so 00:04:23.912 SYMLINK libspdk_bdev_virtio.so 00:04:24.171 LIB libspdk_bdev_raid.a 00:04:24.430 SO libspdk_bdev_raid.so.6.0 00:04:24.430 SYMLINK libspdk_bdev_raid.so 00:04:25.810 LIB libspdk_bdev_nvme.a 00:04:25.810 SO libspdk_bdev_nvme.so.7.1 00:04:25.810 SYMLINK libspdk_bdev_nvme.so 00:04:26.070 CC module/event/subsystems/iobuf/iobuf.o 00:04:26.070 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:26.070 CC module/event/subsystems/keyring/keyring.o 00:04:26.070 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:26.070 CC module/event/subsystems/vmd/vmd.o 00:04:26.070 CC module/event/subsystems/sock/sock.o 00:04:26.070 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:26.070 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:26.070 CC module/event/subsystems/scheduler/scheduler.o 00:04:26.070 CC module/event/subsystems/fsdev/fsdev.o 00:04:26.329 LIB libspdk_event_keyring.a 00:04:26.329 LIB libspdk_event_vhost_blk.a 00:04:26.329 LIB libspdk_event_vfu_tgt.a 00:04:26.329 LIB libspdk_event_vmd.a 00:04:26.329 LIB 
libspdk_event_fsdev.a 00:04:26.329 LIB libspdk_event_scheduler.a 00:04:26.329 LIB libspdk_event_sock.a 00:04:26.329 SO libspdk_event_keyring.so.1.0 00:04:26.329 SO libspdk_event_vhost_blk.so.3.0 00:04:26.329 LIB libspdk_event_iobuf.a 00:04:26.329 SO libspdk_event_vfu_tgt.so.3.0 00:04:26.329 SO libspdk_event_fsdev.so.1.0 00:04:26.329 SO libspdk_event_scheduler.so.4.0 00:04:26.329 SO libspdk_event_vmd.so.6.0 00:04:26.329 SO libspdk_event_sock.so.5.0 00:04:26.329 SO libspdk_event_iobuf.so.3.0 00:04:26.329 SYMLINK libspdk_event_keyring.so 00:04:26.329 SYMLINK libspdk_event_vhost_blk.so 00:04:26.329 SYMLINK libspdk_event_fsdev.so 00:04:26.329 SYMLINK libspdk_event_vfu_tgt.so 00:04:26.329 SYMLINK libspdk_event_scheduler.so 00:04:26.329 SYMLINK libspdk_event_sock.so 00:04:26.329 SYMLINK libspdk_event_vmd.so 00:04:26.329 SYMLINK libspdk_event_iobuf.so 00:04:26.589 CC module/event/subsystems/accel/accel.o 00:04:26.849 LIB libspdk_event_accel.a 00:04:26.849 SO libspdk_event_accel.so.6.0 00:04:26.849 SYMLINK libspdk_event_accel.so 00:04:27.108 CC module/event/subsystems/bdev/bdev.o 00:04:27.108 LIB libspdk_event_bdev.a 00:04:27.108 SO libspdk_event_bdev.so.6.0 00:04:27.367 SYMLINK libspdk_event_bdev.so 00:04:27.367 CC module/event/subsystems/scsi/scsi.o 00:04:27.367 CC module/event/subsystems/nbd/nbd.o 00:04:27.367 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:27.367 CC module/event/subsystems/ublk/ublk.o 00:04:27.367 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:27.627 LIB libspdk_event_nbd.a 00:04:27.627 LIB libspdk_event_ublk.a 00:04:27.627 LIB libspdk_event_scsi.a 00:04:27.627 SO libspdk_event_nbd.so.6.0 00:04:27.627 SO libspdk_event_ublk.so.3.0 00:04:27.627 SO libspdk_event_scsi.so.6.0 00:04:27.627 SYMLINK libspdk_event_ublk.so 00:04:27.627 SYMLINK libspdk_event_nbd.so 00:04:27.627 SYMLINK libspdk_event_scsi.so 00:04:27.627 LIB libspdk_event_nvmf.a 00:04:27.627 SO libspdk_event_nvmf.so.6.0 00:04:27.886 SYMLINK libspdk_event_nvmf.so 00:04:27.886 CC module/event/subsystems/iscsi/iscsi.o 00:04:27.886 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:27.886 LIB libspdk_event_vhost_scsi.a 00:04:27.886 LIB libspdk_event_iscsi.a 00:04:28.145 SO libspdk_event_vhost_scsi.so.3.0 00:04:28.145 SO libspdk_event_iscsi.so.6.0 00:04:28.145 SYMLINK libspdk_event_vhost_scsi.so 00:04:28.145 SYMLINK libspdk_event_iscsi.so 00:04:28.145 SO libspdk.so.6.0 00:04:28.145 SYMLINK libspdk.so 00:04:28.416 CC app/trace_record/trace_record.o 00:04:28.416 CC app/spdk_top/spdk_top.o 00:04:28.416 CC app/spdk_nvme_identify/identify.o 00:04:28.416 CC app/spdk_nvme_discover/discovery_aer.o 00:04:28.416 CC app/spdk_lspci/spdk_lspci.o 00:04:28.416 CXX app/trace/trace.o 00:04:28.416 CC app/spdk_nvme_perf/perf.o 00:04:28.416 CC test/rpc_client/rpc_client_test.o 00:04:28.416 TEST_HEADER include/spdk/accel.h 00:04:28.416 TEST_HEADER include/spdk/accel_module.h 00:04:28.416 TEST_HEADER include/spdk/assert.h 00:04:28.416 TEST_HEADER include/spdk/barrier.h 00:04:28.416 TEST_HEADER include/spdk/base64.h 00:04:28.416 TEST_HEADER include/spdk/bdev.h 00:04:28.416 TEST_HEADER include/spdk/bdev_module.h 00:04:28.416 TEST_HEADER include/spdk/bdev_zone.h 00:04:28.416 TEST_HEADER include/spdk/bit_array.h 00:04:28.416 TEST_HEADER include/spdk/bit_pool.h 00:04:28.416 TEST_HEADER include/spdk/blob_bdev.h 00:04:28.416 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:28.416 TEST_HEADER include/spdk/blobfs.h 00:04:28.416 TEST_HEADER include/spdk/blob.h 00:04:28.416 TEST_HEADER include/spdk/conf.h 00:04:28.416 TEST_HEADER include/spdk/config.h 
00:04:28.416 TEST_HEADER include/spdk/cpuset.h 00:04:28.416 TEST_HEADER include/spdk/crc16.h 00:04:28.416 TEST_HEADER include/spdk/crc32.h 00:04:28.416 TEST_HEADER include/spdk/crc64.h 00:04:28.416 TEST_HEADER include/spdk/dma.h 00:04:28.416 TEST_HEADER include/spdk/dif.h 00:04:28.416 TEST_HEADER include/spdk/endian.h 00:04:28.416 TEST_HEADER include/spdk/env_dpdk.h 00:04:28.416 TEST_HEADER include/spdk/env.h 00:04:28.416 TEST_HEADER include/spdk/event.h 00:04:28.416 TEST_HEADER include/spdk/fd.h 00:04:28.416 TEST_HEADER include/spdk/fd_group.h 00:04:28.416 TEST_HEADER include/spdk/file.h 00:04:28.416 TEST_HEADER include/spdk/fsdev.h 00:04:28.416 TEST_HEADER include/spdk/fsdev_module.h 00:04:28.416 TEST_HEADER include/spdk/ftl.h 00:04:28.416 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:28.416 TEST_HEADER include/spdk/gpt_spec.h 00:04:28.416 TEST_HEADER include/spdk/hexlify.h 00:04:28.416 TEST_HEADER include/spdk/idxd.h 00:04:28.416 TEST_HEADER include/spdk/histogram_data.h 00:04:28.416 TEST_HEADER include/spdk/idxd_spec.h 00:04:28.416 TEST_HEADER include/spdk/init.h 00:04:28.416 TEST_HEADER include/spdk/ioat.h 00:04:28.416 TEST_HEADER include/spdk/ioat_spec.h 00:04:28.416 TEST_HEADER include/spdk/iscsi_spec.h 00:04:28.416 TEST_HEADER include/spdk/json.h 00:04:28.416 TEST_HEADER include/spdk/jsonrpc.h 00:04:28.416 TEST_HEADER include/spdk/keyring.h 00:04:28.416 TEST_HEADER include/spdk/keyring_module.h 00:04:28.416 TEST_HEADER include/spdk/likely.h 00:04:28.416 TEST_HEADER include/spdk/log.h 00:04:28.416 TEST_HEADER include/spdk/lvol.h 00:04:28.416 TEST_HEADER include/spdk/md5.h 00:04:28.416 TEST_HEADER include/spdk/memory.h 00:04:28.416 TEST_HEADER include/spdk/mmio.h 00:04:28.416 TEST_HEADER include/spdk/nbd.h 00:04:28.416 TEST_HEADER include/spdk/net.h 00:04:28.416 TEST_HEADER include/spdk/nvme.h 00:04:28.416 TEST_HEADER include/spdk/notify.h 00:04:28.416 TEST_HEADER include/spdk/nvme_intel.h 00:04:28.416 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:28.416 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:28.416 TEST_HEADER include/spdk/nvme_spec.h 00:04:28.416 TEST_HEADER include/spdk/nvme_zns.h 00:04:28.416 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:28.417 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:28.417 TEST_HEADER include/spdk/nvmf.h 00:04:28.417 TEST_HEADER include/spdk/nvmf_transport.h 00:04:28.417 TEST_HEADER include/spdk/nvmf_spec.h 00:04:28.417 TEST_HEADER include/spdk/opal.h 00:04:28.417 TEST_HEADER include/spdk/opal_spec.h 00:04:28.417 TEST_HEADER include/spdk/pci_ids.h 00:04:28.417 TEST_HEADER include/spdk/pipe.h 00:04:28.417 TEST_HEADER include/spdk/queue.h 00:04:28.417 TEST_HEADER include/spdk/reduce.h 00:04:28.417 TEST_HEADER include/spdk/rpc.h 00:04:28.417 TEST_HEADER include/spdk/scheduler.h 00:04:28.417 TEST_HEADER include/spdk/scsi.h 00:04:28.417 TEST_HEADER include/spdk/scsi_spec.h 00:04:28.417 TEST_HEADER include/spdk/sock.h 00:04:28.417 TEST_HEADER include/spdk/stdinc.h 00:04:28.417 TEST_HEADER include/spdk/string.h 00:04:28.417 TEST_HEADER include/spdk/thread.h 00:04:28.417 TEST_HEADER include/spdk/trace.h 00:04:28.417 TEST_HEADER include/spdk/trace_parser.h 00:04:28.417 TEST_HEADER include/spdk/tree.h 00:04:28.417 TEST_HEADER include/spdk/ublk.h 00:04:28.417 TEST_HEADER include/spdk/util.h 00:04:28.417 TEST_HEADER include/spdk/uuid.h 00:04:28.417 TEST_HEADER include/spdk/version.h 00:04:28.417 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:28.417 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:28.417 TEST_HEADER include/spdk/vhost.h 00:04:28.417 TEST_HEADER 
include/spdk/vmd.h 00:04:28.417 TEST_HEADER include/spdk/xor.h 00:04:28.417 TEST_HEADER include/spdk/zipf.h 00:04:28.417 CXX test/cpp_headers/accel.o 00:04:28.417 CXX test/cpp_headers/accel_module.o 00:04:28.417 CC app/spdk_dd/spdk_dd.o 00:04:28.417 CXX test/cpp_headers/assert.o 00:04:28.417 CXX test/cpp_headers/barrier.o 00:04:28.417 CXX test/cpp_headers/base64.o 00:04:28.417 CXX test/cpp_headers/bdev.o 00:04:28.417 CXX test/cpp_headers/bdev_module.o 00:04:28.417 CXX test/cpp_headers/bdev_zone.o 00:04:28.417 CXX test/cpp_headers/bit_array.o 00:04:28.417 CXX test/cpp_headers/bit_pool.o 00:04:28.417 CC app/iscsi_tgt/iscsi_tgt.o 00:04:28.417 CXX test/cpp_headers/blob_bdev.o 00:04:28.417 CXX test/cpp_headers/blobfs_bdev.o 00:04:28.417 CXX test/cpp_headers/blobfs.o 00:04:28.417 CXX test/cpp_headers/blob.o 00:04:28.417 CXX test/cpp_headers/conf.o 00:04:28.417 CXX test/cpp_headers/config.o 00:04:28.417 CXX test/cpp_headers/cpuset.o 00:04:28.417 CXX test/cpp_headers/crc16.o 00:04:28.417 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:28.417 CC app/nvmf_tgt/nvmf_main.o 00:04:28.417 CC app/spdk_tgt/spdk_tgt.o 00:04:28.417 CXX test/cpp_headers/crc32.o 00:04:28.417 CC app/fio/nvme/fio_plugin.o 00:04:28.417 CC test/app/jsoncat/jsoncat.o 00:04:28.417 CC test/app/stub/stub.o 00:04:28.417 CC test/thread/poller_perf/poller_perf.o 00:04:28.417 CC test/app/histogram_perf/histogram_perf.o 00:04:28.417 CC test/env/pci/pci_ut.o 00:04:28.417 CC test/env/memory/memory_ut.o 00:04:28.417 CC examples/ioat/verify/verify.o 00:04:28.417 CC test/env/vtophys/vtophys.o 00:04:28.417 CC examples/util/zipf/zipf.o 00:04:28.417 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:28.417 CC examples/ioat/perf/perf.o 00:04:28.686 CC app/fio/bdev/fio_plugin.o 00:04:28.686 CC test/dma/test_dma/test_dma.o 00:04:28.686 CC test/app/bdev_svc/bdev_svc.o 00:04:28.686 LINK spdk_lspci 00:04:28.686 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:28.686 CC test/env/mem_callbacks/mem_callbacks.o 00:04:28.952 LINK rpc_client_test 00:04:28.952 LINK spdk_nvme_discover 00:04:28.952 LINK jsoncat 00:04:28.952 LINK poller_perf 00:04:28.952 LINK histogram_perf 00:04:28.952 LINK vtophys 00:04:28.952 CXX test/cpp_headers/crc64.o 00:04:28.952 CXX test/cpp_headers/dif.o 00:04:28.952 LINK interrupt_tgt 00:04:28.952 CXX test/cpp_headers/dma.o 00:04:28.952 LINK zipf 00:04:28.952 CXX test/cpp_headers/endian.o 00:04:28.952 CXX test/cpp_headers/env_dpdk.o 00:04:28.952 CXX test/cpp_headers/env.o 00:04:28.952 LINK nvmf_tgt 00:04:28.952 CXX test/cpp_headers/event.o 00:04:28.952 CXX test/cpp_headers/fd_group.o 00:04:28.952 CXX test/cpp_headers/fd.o 00:04:28.952 CXX test/cpp_headers/file.o 00:04:28.952 LINK env_dpdk_post_init 00:04:28.952 LINK stub 00:04:28.952 LINK iscsi_tgt 00:04:28.952 CXX test/cpp_headers/fsdev.o 00:04:28.952 CXX test/cpp_headers/fsdev_module.o 00:04:28.952 LINK spdk_trace_record 00:04:28.952 CXX test/cpp_headers/ftl.o 00:04:28.952 CXX test/cpp_headers/fuse_dispatcher.o 00:04:28.952 CXX test/cpp_headers/gpt_spec.o 00:04:28.952 CXX test/cpp_headers/hexlify.o 00:04:28.952 LINK spdk_tgt 00:04:28.952 CXX test/cpp_headers/histogram_data.o 00:04:28.952 LINK verify 00:04:28.952 LINK ioat_perf 00:04:28.952 LINK bdev_svc 00:04:28.952 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:29.220 CXX test/cpp_headers/idxd.o 00:04:29.220 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:29.220 CXX test/cpp_headers/idxd_spec.o 00:04:29.220 CXX test/cpp_headers/init.o 00:04:29.220 LINK mem_callbacks 00:04:29.220 CXX test/cpp_headers/ioat.o 00:04:29.220 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:29.220 CXX test/cpp_headers/ioat_spec.o 00:04:29.220 CXX test/cpp_headers/iscsi_spec.o 00:04:29.220 CXX test/cpp_headers/json.o 00:04:29.220 LINK spdk_dd 00:04:29.220 LINK spdk_trace 00:04:29.220 CXX test/cpp_headers/jsonrpc.o 00:04:29.220 CXX test/cpp_headers/keyring.o 00:04:29.220 CXX test/cpp_headers/keyring_module.o 00:04:29.220 CXX test/cpp_headers/likely.o 00:04:29.220 CXX test/cpp_headers/log.o 00:04:29.220 CXX test/cpp_headers/lvol.o 00:04:29.487 CXX test/cpp_headers/md5.o 00:04:29.487 CXX test/cpp_headers/memory.o 00:04:29.487 CXX test/cpp_headers/mmio.o 00:04:29.487 CXX test/cpp_headers/nbd.o 00:04:29.487 CXX test/cpp_headers/net.o 00:04:29.487 CXX test/cpp_headers/notify.o 00:04:29.487 CXX test/cpp_headers/nvme.o 00:04:29.487 LINK pci_ut 00:04:29.487 CXX test/cpp_headers/nvme_intel.o 00:04:29.487 CXX test/cpp_headers/nvme_ocssd.o 00:04:29.487 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:29.487 CXX test/cpp_headers/nvme_spec.o 00:04:29.487 CXX test/cpp_headers/nvme_zns.o 00:04:29.487 CXX test/cpp_headers/nvmf_cmd.o 00:04:29.487 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:29.487 CXX test/cpp_headers/nvmf.o 00:04:29.487 CXX test/cpp_headers/nvmf_spec.o 00:04:29.487 CXX test/cpp_headers/nvmf_transport.o 00:04:29.487 CC test/event/event_perf/event_perf.o 00:04:29.487 CC test/event/reactor/reactor.o 00:04:29.487 CXX test/cpp_headers/opal.o 00:04:29.487 LINK nvme_fuzz 00:04:29.487 CC test/event/reactor_perf/reactor_perf.o 00:04:29.753 CC examples/sock/hello_world/hello_sock.o 00:04:29.753 CC examples/vmd/lsvmd/lsvmd.o 00:04:29.753 CXX test/cpp_headers/opal_spec.o 00:04:29.753 CC examples/thread/thread/thread_ex.o 00:04:29.753 CC examples/idxd/perf/perf.o 00:04:29.753 CC test/event/app_repeat/app_repeat.o 00:04:29.753 CXX test/cpp_headers/pci_ids.o 00:04:29.753 LINK test_dma 00:04:29.753 CXX test/cpp_headers/pipe.o 00:04:29.753 CC examples/vmd/led/led.o 00:04:29.753 CC test/event/scheduler/scheduler.o 00:04:29.753 CXX test/cpp_headers/queue.o 00:04:29.753 CXX test/cpp_headers/reduce.o 00:04:29.753 CXX test/cpp_headers/rpc.o 00:04:29.753 CXX test/cpp_headers/scheduler.o 00:04:29.753 CXX test/cpp_headers/scsi.o 00:04:29.753 CXX test/cpp_headers/scsi_spec.o 00:04:29.753 CXX test/cpp_headers/sock.o 00:04:29.753 CXX test/cpp_headers/stdinc.o 00:04:29.753 CXX test/cpp_headers/string.o 00:04:29.753 CXX test/cpp_headers/thread.o 00:04:29.753 CXX test/cpp_headers/trace.o 00:04:29.753 CXX test/cpp_headers/trace_parser.o 00:04:29.753 LINK spdk_bdev 00:04:29.753 CXX test/cpp_headers/tree.o 00:04:29.753 LINK spdk_nvme 00:04:29.753 CXX test/cpp_headers/ublk.o 00:04:29.753 CXX test/cpp_headers/util.o 00:04:29.753 CXX test/cpp_headers/uuid.o 00:04:30.014 LINK event_perf 00:04:30.014 CXX test/cpp_headers/version.o 00:04:30.014 CXX test/cpp_headers/vfio_user_pci.o 00:04:30.014 LINK reactor 00:04:30.014 LINK reactor_perf 00:04:30.014 CXX test/cpp_headers/vfio_user_spec.o 00:04:30.014 LINK lsvmd 00:04:30.014 CC app/vhost/vhost.o 00:04:30.014 CXX test/cpp_headers/vhost.o 00:04:30.014 CXX test/cpp_headers/vmd.o 00:04:30.014 CXX test/cpp_headers/xor.o 00:04:30.014 CXX test/cpp_headers/zipf.o 00:04:30.014 LINK spdk_nvme_perf 00:04:30.014 LINK spdk_nvme_identify 00:04:30.014 LINK app_repeat 00:04:30.014 LINK led 00:04:30.014 LINK vhost_fuzz 00:04:30.014 LINK memory_ut 00:04:30.014 LINK spdk_top 00:04:30.014 LINK hello_sock 00:04:30.274 LINK scheduler 00:04:30.274 LINK thread 00:04:30.274 CC test/nvme/overhead/overhead.o 00:04:30.274 CC test/nvme/reset/reset.o 
00:04:30.274 CC test/nvme/compliance/nvme_compliance.o 00:04:30.274 CC test/nvme/reserve/reserve.o 00:04:30.274 CC test/nvme/sgl/sgl.o 00:04:30.274 CC test/nvme/boot_partition/boot_partition.o 00:04:30.274 CC test/nvme/startup/startup.o 00:04:30.274 CC test/nvme/fdp/fdp.o 00:04:30.274 CC test/nvme/connect_stress/connect_stress.o 00:04:30.274 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:30.274 CC test/nvme/cuse/cuse.o 00:04:30.274 CC test/nvme/err_injection/err_injection.o 00:04:30.274 CC test/nvme/e2edp/nvme_dp.o 00:04:30.274 CC test/nvme/aer/aer.o 00:04:30.274 CC test/nvme/simple_copy/simple_copy.o 00:04:30.274 CC test/nvme/fused_ordering/fused_ordering.o 00:04:30.274 LINK vhost 00:04:30.274 LINK idxd_perf 00:04:30.274 CC test/accel/dif/dif.o 00:04:30.274 CC test/blobfs/mkfs/mkfs.o 00:04:30.534 CC test/lvol/esnap/esnap.o 00:04:30.534 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:30.534 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:30.534 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:30.534 CC examples/nvme/abort/abort.o 00:04:30.534 CC examples/nvme/hello_world/hello_world.o 00:04:30.534 CC examples/nvme/reconnect/reconnect.o 00:04:30.534 CC examples/nvme/hotplug/hotplug.o 00:04:30.535 CC examples/nvme/arbitration/arbitration.o 00:04:30.535 LINK connect_stress 00:04:30.535 LINK err_injection 00:04:30.535 LINK reserve 00:04:30.535 LINK doorbell_aers 00:04:30.535 LINK fused_ordering 00:04:30.795 LINK reset 00:04:30.795 LINK boot_partition 00:04:30.795 LINK mkfs 00:04:30.795 LINK startup 00:04:30.795 LINK nvme_dp 00:04:30.795 CC examples/accel/perf/accel_perf.o 00:04:30.795 LINK aer 00:04:30.795 CC examples/blob/hello_world/hello_blob.o 00:04:30.795 CC examples/blob/cli/blobcli.o 00:04:30.795 LINK fdp 00:04:30.795 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:30.795 LINK nvme_compliance 00:04:30.795 LINK simple_copy 00:04:30.795 LINK sgl 00:04:30.795 LINK hello_world 00:04:30.795 LINK overhead 00:04:30.795 LINK cmb_copy 00:04:31.054 LINK pmr_persistence 00:04:31.054 LINK hotplug 00:04:31.054 LINK reconnect 00:04:31.054 LINK abort 00:04:31.054 LINK arbitration 00:04:31.054 LINK hello_blob 00:04:31.054 LINK hello_fsdev 00:04:31.054 LINK dif 00:04:31.313 LINK nvme_manage 00:04:31.313 LINK blobcli 00:04:31.313 LINK accel_perf 00:04:31.572 LINK iscsi_fuzz 00:04:31.572 CC test/bdev/bdevio/bdevio.o 00:04:31.572 CC examples/bdev/hello_world/hello_bdev.o 00:04:31.572 CC examples/bdev/bdevperf/bdevperf.o 00:04:31.830 LINK hello_bdev 00:04:31.830 LINK bdevio 00:04:32.089 LINK cuse 00:04:32.350 LINK bdevperf 00:04:32.920 CC examples/nvmf/nvmf/nvmf.o 00:04:33.179 LINK nvmf 00:04:35.728 LINK esnap 00:04:35.728 00:04:35.728 real 1m7.561s 00:04:35.728 user 9m2.469s 00:04:35.728 sys 1m59.280s 00:04:35.728 06:49:56 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:35.728 06:49:56 make -- common/autotest_common.sh@10 -- $ set +x 00:04:35.728 ************************************ 00:04:35.728 END TEST make 00:04:35.728 ************************************ 00:04:35.728 06:49:56 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:35.728 06:49:56 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:35.728 06:49:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:35.728 06:49:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.728 06:49:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:35.728 06:49:56 -- pm/common@44 -- $ pid=6100 00:04:35.728 06:49:56 -- 
pm/common@50 -- $ kill -TERM 6100 00:04:35.728 06:49:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.728 06:49:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:35.728 06:49:56 -- pm/common@44 -- $ pid=6102 00:04:35.728 06:49:56 -- pm/common@50 -- $ kill -TERM 6102 00:04:35.728 06:49:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.728 06:49:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:35.728 06:49:56 -- pm/common@44 -- $ pid=6104 00:04:35.728 06:49:56 -- pm/common@50 -- $ kill -TERM 6104 00:04:35.728 06:49:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.728 06:49:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:35.728 06:49:56 -- pm/common@44 -- $ pid=6135 00:04:35.728 06:49:56 -- pm/common@50 -- $ sudo -E kill -TERM 6135 00:04:35.987 06:49:56 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:35.987 06:49:56 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:35.987 06:49:56 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:35.987 06:49:56 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:35.987 06:49:56 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:35.987 06:49:56 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:35.987 06:49:56 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.987 06:49:56 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.987 06:49:56 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.987 06:49:56 -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.987 06:49:56 -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.987 06:49:56 -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.987 06:49:56 -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.987 06:49:56 -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.987 06:49:56 -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.987 06:49:56 -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.987 06:49:56 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.987 06:49:56 -- scripts/common.sh@344 -- # case "$op" in 00:04:35.987 06:49:56 -- scripts/common.sh@345 -- # : 1 00:04:35.987 06:49:56 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.987 06:49:56 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.987 06:49:56 -- scripts/common.sh@365 -- # decimal 1 00:04:35.987 06:49:56 -- scripts/common.sh@353 -- # local d=1 00:04:35.987 06:49:56 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.987 06:49:56 -- scripts/common.sh@355 -- # echo 1 00:04:35.987 06:49:56 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.987 06:49:56 -- scripts/common.sh@366 -- # decimal 2 00:04:35.987 06:49:56 -- scripts/common.sh@353 -- # local d=2 00:04:35.987 06:49:56 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.987 06:49:56 -- scripts/common.sh@355 -- # echo 2 00:04:35.987 06:49:56 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.987 06:49:56 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.987 06:49:56 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.987 06:49:56 -- scripts/common.sh@368 -- # return 0 00:04:35.987 06:49:56 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.987 06:49:56 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:35.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.987 --rc genhtml_branch_coverage=1 00:04:35.987 --rc genhtml_function_coverage=1 00:04:35.987 --rc genhtml_legend=1 00:04:35.987 --rc geninfo_all_blocks=1 00:04:35.987 --rc geninfo_unexecuted_blocks=1 00:04:35.987 00:04:35.987 ' 00:04:35.987 06:49:56 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:35.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.987 --rc genhtml_branch_coverage=1 00:04:35.987 --rc genhtml_function_coverage=1 00:04:35.987 --rc genhtml_legend=1 00:04:35.987 --rc geninfo_all_blocks=1 00:04:35.987 --rc geninfo_unexecuted_blocks=1 00:04:35.987 00:04:35.987 ' 00:04:35.987 06:49:56 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:35.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.987 --rc genhtml_branch_coverage=1 00:04:35.987 --rc genhtml_function_coverage=1 00:04:35.987 --rc genhtml_legend=1 00:04:35.987 --rc geninfo_all_blocks=1 00:04:35.987 --rc geninfo_unexecuted_blocks=1 00:04:35.987 00:04:35.987 ' 00:04:35.987 06:49:56 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:35.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.987 --rc genhtml_branch_coverage=1 00:04:35.987 --rc genhtml_function_coverage=1 00:04:35.987 --rc genhtml_legend=1 00:04:35.988 --rc geninfo_all_blocks=1 00:04:35.988 --rc geninfo_unexecuted_blocks=1 00:04:35.988 00:04:35.988 ' 00:04:35.988 06:49:56 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:35.988 06:49:56 -- nvmf/common.sh@7 -- # uname -s 00:04:35.988 06:49:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:35.988 06:49:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:35.988 06:49:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:35.988 06:49:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:35.988 06:49:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:35.988 06:49:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:35.988 06:49:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:35.988 06:49:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:35.988 06:49:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:35.988 06:49:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:35.988 06:49:56 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:35.988 06:49:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:35.988 06:49:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:35.988 06:49:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:35.988 06:49:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:35.988 06:49:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:35.988 06:49:56 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:35.988 06:49:56 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:35.988 06:49:56 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:35.988 06:49:56 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:35.988 06:49:56 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:35.988 06:49:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.988 06:49:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.988 06:49:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.988 06:49:56 -- paths/export.sh@5 -- # export PATH 00:04:35.988 06:49:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.988 06:49:56 -- nvmf/common.sh@51 -- # : 0 00:04:35.988 06:49:56 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:35.988 06:49:56 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:35.988 06:49:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:35.988 06:49:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:35.988 06:49:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:35.988 06:49:56 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:35.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:35.988 06:49:56 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:35.988 06:49:56 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:35.988 06:49:56 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:35.988 06:49:56 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:35.988 06:49:56 -- spdk/autotest.sh@32 -- # uname -s 00:04:35.988 06:49:56 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:35.988 06:49:56 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:35.988 06:49:56 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:04:35.988 06:49:56 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:35.988 06:49:56 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:35.988 06:49:56 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:35.988 06:49:56 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:35.988 06:49:56 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:35.988 06:49:56 -- spdk/autotest.sh@48 -- # udevadm_pid=86999 00:04:35.988 06:49:56 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:36.246 06:49:56 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:36.246 06:49:56 -- pm/common@17 -- # local monitor 00:04:36.246 06:49:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:36.246 06:49:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:36.246 06:49:56 -- pm/common@21 -- # date +%s 00:04:36.246 06:49:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:36.246 06:49:56 -- pm/common@21 -- # date +%s 00:04:36.246 06:49:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:36.246 06:49:56 -- pm/common@21 -- # date +%s 00:04:36.246 06:49:56 -- pm/common@25 -- # sleep 1 00:04:36.246 06:49:56 -- pm/common@21 -- # date +%s 00:04:36.246 06:49:56 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731908996 00:04:36.246 06:49:56 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731908996 00:04:36.246 06:49:56 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731908996 00:04:36.246 06:49:56 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731908996 00:04:36.246 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731908996_collect-vmstat.pm.log 00:04:36.246 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731908996_collect-cpu-load.pm.log 00:04:36.246 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731908996_collect-cpu-temp.pm.log 00:04:36.246 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731908996_collect-bmc-pm.bmc.pm.log 00:04:37.185 06:49:57 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:37.185 06:49:57 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:37.185 06:49:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.185 06:49:57 -- common/autotest_common.sh@10 -- # set +x 00:04:37.185 06:49:57 -- spdk/autotest.sh@59 -- # create_test_list 00:04:37.185 06:49:57 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:37.185 06:49:57 -- common/autotest_common.sh@10 -- # set +x 00:04:37.185 06:49:58 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:37.185 06:49:58 -- 
spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:37.185 06:49:58 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:37.185 06:49:58 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:37.185 06:49:58 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:37.185 06:49:58 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:37.185 06:49:58 -- common/autotest_common.sh@1457 -- # uname 00:04:37.185 06:49:58 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:37.185 06:49:58 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:37.185 06:49:58 -- common/autotest_common.sh@1477 -- # uname 00:04:37.185 06:49:58 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:37.185 06:49:58 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:37.185 06:49:58 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:37.185 lcov: LCOV version 1.15 00:04:37.185 06:49:58 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:09.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:09.313 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:14.600 06:50:35 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:14.600 06:50:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:14.600 06:50:35 -- common/autotest_common.sh@10 -- # set +x 00:05:14.600 06:50:35 -- spdk/autotest.sh@78 -- # rm -f 00:05:14.600 06:50:35 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:15.544 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:05:15.544 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:15.544 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:15.544 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:15.544 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:15.544 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:15.544 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:15.544 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:15.544 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:15.544 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:15.544 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:15.544 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:15.544 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:15.544 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:15.544 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:15.544 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:15.544 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:15.805 06:50:36 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:05:15.805 06:50:36 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:15.805 06:50:36 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:15.805 06:50:36 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:15.805 06:50:36 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:15.805 06:50:36 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:15.805 06:50:36 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:15.805 06:50:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:15.805 06:50:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:15.805 06:50:36 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:15.805 06:50:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:15.805 06:50:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:15.805 06:50:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:15.805 06:50:36 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:15.805 06:50:36 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:15.805 No valid GPT data, bailing 00:05:15.805 06:50:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:15.805 06:50:36 -- scripts/common.sh@394 -- # pt= 00:05:15.805 06:50:36 -- scripts/common.sh@395 -- # return 1 00:05:15.805 06:50:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:15.805 1+0 records in 00:05:15.805 1+0 records out 00:05:15.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00228676 s, 459 MB/s 00:05:15.805 06:50:36 -- spdk/autotest.sh@105 -- # sync 00:05:15.805 06:50:36 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:15.805 06:50:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:15.805 06:50:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:18.352 06:50:38 -- spdk/autotest.sh@111 -- # uname -s 00:05:18.352 06:50:38 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:18.352 06:50:38 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:18.352 06:50:38 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:18.921 Hugepages 00:05:18.921 node hugesize free / total 00:05:18.921 node0 1048576kB 0 / 0 00:05:18.921 node0 2048kB 0 / 0 00:05:18.921 node1 1048576kB 0 / 0 00:05:18.921 node1 2048kB 0 / 0 00:05:18.921 00:05:18.921 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:18.921 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:18.921 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:18.921 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:18.921 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:18.921 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:18.921 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:18.921 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:18.921 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:18.921 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:18.921 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:18.921 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:18.921 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:18.921 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:18.921 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:18.921 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:18.921 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:19.181 NVMe 0000:88:00.0 8086 0a54 1 
nvme nvme0 nvme0n1 00:05:19.181 06:50:39 -- spdk/autotest.sh@117 -- # uname -s 00:05:19.181 06:50:39 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:19.181 06:50:39 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:19.181 06:50:39 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:20.563 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:20.563 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:20.563 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:20.563 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:20.563 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:20.563 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:20.563 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:20.563 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:20.563 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:20.563 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:20.563 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:20.563 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:20.563 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:20.563 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:20.563 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:20.563 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:21.134 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:21.395 06:50:42 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:22.334 06:50:43 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:22.334 06:50:43 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:22.334 06:50:43 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:22.334 06:50:43 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:22.334 06:50:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:22.334 06:50:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:22.334 06:50:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:22.334 06:50:43 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:22.334 06:50:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:22.594 06:50:43 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:22.594 06:50:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:22.594 06:50:43 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:23.534 Waiting for block devices as requested 00:05:23.794 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:23.794 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:23.794 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:24.055 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:24.055 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:24.055 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:24.055 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:24.315 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:24.315 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:24.315 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:24.575 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:24.575 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:24.575 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:24.575 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:24.836 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:24.836 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:24.836 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:05:25.095 06:50:45 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:25.095 06:50:45 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:25.095 06:50:45 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:25.095 06:50:45 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:05:25.095 06:50:45 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:25.095 06:50:45 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:25.095 06:50:45 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:25.095 06:50:45 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:25.095 06:50:45 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:25.095 06:50:45 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:25.095 06:50:45 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:25.095 06:50:45 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:25.095 06:50:45 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:25.095 06:50:45 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:05:25.095 06:50:45 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:25.095 06:50:45 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:25.095 06:50:45 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:25.095 06:50:45 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:25.095 06:50:45 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:25.095 06:50:45 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:25.095 06:50:45 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:25.095 06:50:45 -- common/autotest_common.sh@1543 -- # continue 00:05:25.095 06:50:45 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:25.095 06:50:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:25.095 06:50:45 -- common/autotest_common.sh@10 -- # set +x 00:05:25.095 06:50:45 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:25.095 06:50:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:25.095 06:50:45 -- common/autotest_common.sh@10 -- # set +x 00:05:25.095 06:50:45 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:26.479 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:26.479 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:26.479 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:26.479 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:26.479 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:26.479 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:26.479 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:26.479 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:26.479 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:26.479 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:26.479 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:26.479 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:26.479 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:26.479 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:26.479 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:26.479 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:27.422 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:27.422 06:50:48 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:05:27.422 06:50:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:27.422 06:50:48 -- common/autotest_common.sh@10 -- # set +x 00:05:27.682 06:50:48 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:27.682 06:50:48 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:27.682 06:50:48 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:27.682 06:50:48 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:27.682 06:50:48 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:27.682 06:50:48 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:27.682 06:50:48 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:27.682 06:50:48 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:27.682 06:50:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:27.682 06:50:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:27.682 06:50:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:27.682 06:50:48 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:27.682 06:50:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:27.682 06:50:48 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:27.682 06:50:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:27.682 06:50:48 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:27.682 06:50:48 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:27.682 06:50:48 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:27.682 06:50:48 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:27.682 06:50:48 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:27.682 06:50:48 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:27.682 06:50:48 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:05:27.682 06:50:48 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:05:27.682 06:50:48 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=97617 00:05:27.682 06:50:48 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.682 06:50:48 -- common/autotest_common.sh@1585 -- # waitforlisten 97617 00:05:27.682 06:50:48 -- common/autotest_common.sh@835 -- # '[' -z 97617 ']' 00:05:27.682 06:50:48 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.682 06:50:48 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.682 06:50:48 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.682 06:50:48 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.682 06:50:48 -- common/autotest_common.sh@10 -- # set +x 00:05:27.682 [2024-11-18 06:50:48.545966] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
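The opal_revert_cleanup path traced above first narrows the NVMe list to controllers whose PCI device ID is 0x0a54: gen_nvme.sh emits the transport addresses and each one's sysfs device file is compared against that ID. A minimal sketch of that filtering, using only paths and values visible in the log (the loop itself is illustrative, not the harness code):

    # list NVMe transport addresses known to SPDK, then keep the ones with device ID 0x0a54
    for bdf in $(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'); do
      # /sys/bus/pci/devices/<BDF>/device holds the PCI device ID, e.g. 0x0a54 for the drive at 0000:88:00.0
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && echo "$bdf"
    done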
00:05:27.682 [2024-11-18 06:50:48.546053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97617 ] 00:05:27.682 [2024-11-18 06:50:48.611256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.682 [2024-11-18 06:50:48.654913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.941 06:50:48 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.941 06:50:48 -- common/autotest_common.sh@868 -- # return 0 00:05:27.941 06:50:48 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:27.941 06:50:48 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:27.941 06:50:48 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:31.235 nvme0n1 00:05:31.235 06:50:51 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:31.495 [2024-11-18 06:50:52.238417] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:31.495 [2024-11-18 06:50:52.238461] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:31.495 request: 00:05:31.495 { 00:05:31.495 "nvme_ctrlr_name": "nvme0", 00:05:31.495 "password": "test", 00:05:31.495 "method": "bdev_nvme_opal_revert", 00:05:31.495 "req_id": 1 00:05:31.495 } 00:05:31.495 Got JSON-RPC error response 00:05:31.495 response: 00:05:31.495 { 00:05:31.495 "code": -32603, 00:05:31.495 "message": "Internal error" 00:05:31.495 } 00:05:31.495 06:50:52 -- common/autotest_common.sh@1591 -- # true 00:05:31.495 06:50:52 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:31.495 06:50:52 -- common/autotest_common.sh@1595 -- # killprocess 97617 00:05:31.495 06:50:52 -- common/autotest_common.sh@954 -- # '[' -z 97617 ']' 00:05:31.495 06:50:52 -- common/autotest_common.sh@958 -- # kill -0 97617 00:05:31.495 06:50:52 -- common/autotest_common.sh@959 -- # uname 00:05:31.495 06:50:52 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.495 06:50:52 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97617 00:05:31.495 06:50:52 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.495 06:50:52 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.495 06:50:52 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97617' 00:05:31.495 killing process with pid 97617 00:05:31.495 06:50:52 -- common/autotest_common.sh@973 -- # kill 97617 00:05:31.495 06:50:52 -- common/autotest_common.sh@978 -- # wait 97617 00:05:31.495 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:31.495 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:31.495 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:31.495 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:31.496 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:31.496 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:31.496 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:31.496 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:31.496 EAL: Unexpected size 0 of DMA 
remapping cleared instead of 2097152 00:05:31.496 EAL: (the 'Unexpected size 0 of DMA remapping cleared instead of 2097152' notice repeats continuously while the DMA mappings are torn down; the bulk of the duplicate lines is trimmed here)
00:05:31.497 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:31.497 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:31.497 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:31.497 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:31.497 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:31.497 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:31.497 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:31.497 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:31.497 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:31.497 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:33.399 06:50:54 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:33.399 06:50:54 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:33.399 06:50:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:33.399 06:50:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:33.399 06:50:54 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:33.399 06:50:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:33.399 06:50:54 -- common/autotest_common.sh@10 -- # set +x 00:05:33.399 06:50:54 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:33.399 06:50:54 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:33.399 06:50:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.399 06:50:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.399 06:50:54 -- common/autotest_common.sh@10 -- # set +x 00:05:33.399 ************************************ 00:05:33.399 START TEST env 00:05:33.399 ************************************ 00:05:33.399 06:50:54 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:33.399 * Looking for test storage... 00:05:33.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:33.399 06:50:54 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:33.399 06:50:54 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:33.399 06:50:54 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:33.399 06:50:54 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:33.399 06:50:54 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.399 06:50:54 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.399 06:50:54 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.399 06:50:54 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.399 06:50:54 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.399 06:50:54 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.399 06:50:54 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.399 06:50:54 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.399 06:50:54 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.399 06:50:54 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.399 06:50:54 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.399 06:50:54 env -- scripts/common.sh@344 -- # case "$op" in 00:05:33.399 06:50:54 env -- scripts/common.sh@345 -- # : 1 00:05:33.399 06:50:54 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.399 06:50:54 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.399 06:50:54 env -- scripts/common.sh@365 -- # decimal 1 00:05:33.399 06:50:54 env -- scripts/common.sh@353 -- # local d=1 00:05:33.399 06:50:54 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.399 06:50:54 env -- scripts/common.sh@355 -- # echo 1 00:05:33.399 06:50:54 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.399 06:50:54 env -- scripts/common.sh@366 -- # decimal 2 00:05:33.399 06:50:54 env -- scripts/common.sh@353 -- # local d=2 00:05:33.399 06:50:54 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.399 06:50:54 env -- scripts/common.sh@355 -- # echo 2 00:05:33.399 06:50:54 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.399 06:50:54 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.399 06:50:54 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.399 06:50:54 env -- scripts/common.sh@368 -- # return 0 00:05:33.399 06:50:54 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.399 06:50:54 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:33.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.399 --rc genhtml_branch_coverage=1 00:05:33.399 --rc genhtml_function_coverage=1 00:05:33.399 --rc genhtml_legend=1 00:05:33.399 --rc geninfo_all_blocks=1 00:05:33.399 --rc geninfo_unexecuted_blocks=1 00:05:33.399 00:05:33.399 ' 00:05:33.399 06:50:54 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:33.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.399 --rc genhtml_branch_coverage=1 00:05:33.399 --rc genhtml_function_coverage=1 00:05:33.399 --rc genhtml_legend=1 00:05:33.399 --rc geninfo_all_blocks=1 00:05:33.399 --rc geninfo_unexecuted_blocks=1 00:05:33.399 00:05:33.399 ' 00:05:33.399 06:50:54 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:33.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.399 --rc genhtml_branch_coverage=1 00:05:33.399 --rc genhtml_function_coverage=1 00:05:33.399 --rc genhtml_legend=1 00:05:33.399 --rc geninfo_all_blocks=1 00:05:33.399 --rc geninfo_unexecuted_blocks=1 00:05:33.399 00:05:33.399 ' 00:05:33.399 06:50:54 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:33.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.399 --rc genhtml_branch_coverage=1 00:05:33.399 --rc genhtml_function_coverage=1 00:05:33.399 --rc genhtml_legend=1 00:05:33.399 --rc geninfo_all_blocks=1 00:05:33.399 --rc geninfo_unexecuted_blocks=1 00:05:33.399 00:05:33.399 ' 00:05:33.399 06:50:54 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:33.399 06:50:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.399 06:50:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.399 06:50:54 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.399 ************************************ 00:05:33.399 START TEST env_memory 00:05:33.399 ************************************ 00:05:33.399 06:50:54 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:33.399 00:05:33.399 00:05:33.399 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.399 http://cunit.sourceforge.net/ 00:05:33.399 00:05:33.399 00:05:33.399 Suite: memory 00:05:33.399 Test: alloc and free memory map ...[2024-11-18 06:50:54.250666] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:33.399 passed 00:05:33.399 Test: mem map translation ...[2024-11-18 06:50:54.271814] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:33.399 [2024-11-18 06:50:54.271835] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:33.399 [2024-11-18 06:50:54.271892] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:33.399 [2024-11-18 06:50:54.271904] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:33.399 passed 00:05:33.399 Test: mem map registration ...[2024-11-18 06:50:54.314976] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:33.399 [2024-11-18 06:50:54.315010] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:33.399 passed 00:05:33.399 Test: mem map adjacent registrations ...passed 00:05:33.399 00:05:33.399 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.399 suites 1 1 n/a 0 0 00:05:33.399 tests 4 4 4 0 0 00:05:33.399 asserts 152 152 152 0 n/a 00:05:33.399 00:05:33.399 Elapsed time = 0.146 seconds 00:05:33.399 00:05:33.399 real 0m0.155s 00:05:33.399 user 0m0.147s 00:05:33.399 sys 0m0.007s 00:05:33.400 06:50:54 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.400 06:50:54 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:33.400 ************************************ 00:05:33.400 END TEST env_memory 00:05:33.400 ************************************ 00:05:33.659 06:50:54 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:33.659 06:50:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.659 06:50:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.659 06:50:54 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.659 ************************************ 00:05:33.659 START TEST env_vtophys 00:05:33.659 ************************************ 00:05:33.659 06:50:54 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:33.659 EAL: lib.eal log level changed from notice to debug 00:05:33.659 EAL: Detected lcore 0 as core 0 on socket 0 00:05:33.659 EAL: Detected lcore 1 as core 1 on socket 0 00:05:33.659 EAL: Detected lcore 2 as core 2 on socket 0 00:05:33.659 EAL: Detected lcore 3 as core 3 on socket 0 00:05:33.659 EAL: Detected lcore 4 as core 4 on socket 0 00:05:33.659 EAL: Detected lcore 5 as core 5 on socket 0 00:05:33.659 EAL: Detected lcore 6 as core 8 on socket 0 00:05:33.659 EAL: Detected lcore 7 as core 9 on socket 0 00:05:33.659 EAL: Detected lcore 8 as core 10 on socket 0 00:05:33.659 EAL: Detected lcore 9 as core 11 on socket 0 00:05:33.659 EAL: Detected lcore 10 
as core 12 on socket 0 00:05:33.659 EAL: Detected lcore 11 as core 13 on socket 0 00:05:33.659 EAL: Detected lcore 12 as core 0 on socket 1 00:05:33.659 EAL: Detected lcore 13 as core 1 on socket 1 00:05:33.659 EAL: Detected lcore 14 as core 2 on socket 1 00:05:33.659 EAL: Detected lcore 15 as core 3 on socket 1 00:05:33.659 EAL: Detected lcore 16 as core 4 on socket 1 00:05:33.659 EAL: Detected lcore 17 as core 5 on socket 1 00:05:33.659 EAL: Detected lcore 18 as core 8 on socket 1 00:05:33.659 EAL: Detected lcore 19 as core 9 on socket 1 00:05:33.659 EAL: Detected lcore 20 as core 10 on socket 1 00:05:33.659 EAL: Detected lcore 21 as core 11 on socket 1 00:05:33.659 EAL: Detected lcore 22 as core 12 on socket 1 00:05:33.659 EAL: Detected lcore 23 as core 13 on socket 1 00:05:33.659 EAL: Detected lcore 24 as core 0 on socket 0 00:05:33.659 EAL: Detected lcore 25 as core 1 on socket 0 00:05:33.659 EAL: Detected lcore 26 as core 2 on socket 0 00:05:33.659 EAL: Detected lcore 27 as core 3 on socket 0 00:05:33.659 EAL: Detected lcore 28 as core 4 on socket 0 00:05:33.659 EAL: Detected lcore 29 as core 5 on socket 0 00:05:33.659 EAL: Detected lcore 30 as core 8 on socket 0 00:05:33.659 EAL: Detected lcore 31 as core 9 on socket 0 00:05:33.659 EAL: Detected lcore 32 as core 10 on socket 0 00:05:33.659 EAL: Detected lcore 33 as core 11 on socket 0 00:05:33.659 EAL: Detected lcore 34 as core 12 on socket 0 00:05:33.659 EAL: Detected lcore 35 as core 13 on socket 0 00:05:33.659 EAL: Detected lcore 36 as core 0 on socket 1 00:05:33.659 EAL: Detected lcore 37 as core 1 on socket 1 00:05:33.659 EAL: Detected lcore 38 as core 2 on socket 1 00:05:33.659 EAL: Detected lcore 39 as core 3 on socket 1 00:05:33.659 EAL: Detected lcore 40 as core 4 on socket 1 00:05:33.659 EAL: Detected lcore 41 as core 5 on socket 1 00:05:33.659 EAL: Detected lcore 42 as core 8 on socket 1 00:05:33.659 EAL: Detected lcore 43 as core 9 on socket 1 00:05:33.659 EAL: Detected lcore 44 as core 10 on socket 1 00:05:33.659 EAL: Detected lcore 45 as core 11 on socket 1 00:05:33.659 EAL: Detected lcore 46 as core 12 on socket 1 00:05:33.659 EAL: Detected lcore 47 as core 13 on socket 1 00:05:33.659 EAL: Maximum logical cores by configuration: 128 00:05:33.659 EAL: Detected CPU lcores: 48 00:05:33.659 EAL: Detected NUMA nodes: 2 00:05:33.659 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:33.659 EAL: Detected shared linkage of DPDK 00:05:33.659 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:33.659 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:33.659 EAL: Registered [vdev] bus. 
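The lcore/core/socket table EAL prints above is simply the host CPU topology; it can be cross-checked with standard tools if the mapping looks suspicious (these commands are generic, not part of the test scripts):

    lscpu -e=CPU,CORE,SOCKET,NODE | head -n 5   # lcore -> physical core/socket mapping (48 CPUs over 2 sockets here)
    ls /sys/devices/system/node/ | grep node    # node0 node1 -> the "Detected NUMA nodes: 2" above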
00:05:33.659 EAL: bus.vdev log level changed from disabled to notice 00:05:33.659 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:33.659 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:33.659 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:33.659 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:33.659 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:33.659 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:33.659 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:33.659 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:33.659 EAL: No shared files mode enabled, IPC will be disabled 00:05:33.659 EAL: No shared files mode enabled, IPC is disabled 00:05:33.659 EAL: Bus pci wants IOVA as 'DC' 00:05:33.659 EAL: Bus vdev wants IOVA as 'DC' 00:05:33.659 EAL: Buses did not request a specific IOVA mode. 00:05:33.659 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:33.660 EAL: Selected IOVA mode 'VA' 00:05:33.660 EAL: Probing VFIO support... 00:05:33.660 EAL: IOMMU type 1 (Type 1) is supported 00:05:33.660 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:33.660 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:33.660 EAL: VFIO support initialized 00:05:33.660 EAL: Ask a virtual area of 0x2e000 bytes 00:05:33.660 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:33.660 EAL: Setting up physically contiguous memory... 
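The VFIO probe above ("IOMMU is available, selecting IOVA as VA mode", "IOMMU type 1 (Type 1) is supported") depends on the host IOMMU and on the vfio modules that 0000:88:00.0 was bound to earlier in the run. A quick way to confirm that state on the host, shown as a sketch with generic commands rather than harness code:

    ls /sys/kernel/iommu_groups | wc -l          # non-zero when the IOMMU is enabled and groups exist
    lsmod | grep -E '^vfio(_pci|_iommu_type1)?'  # modules behind "VFIO support initialized"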
00:05:33.660 EAL: Setting maximum number of open files to 524288 00:05:33.660 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:33.660 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:33.660 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:33.660 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.660 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:33.660 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:33.660 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.660 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:33.660 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:33.660 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.660 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:33.660 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:33.660 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.660 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:33.660 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:33.660 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.660 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:33.660 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:33.660 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.660 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:33.660 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:33.660 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.660 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:33.660 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:33.660 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.660 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:33.660 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:33.660 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:33.660 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.660 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:33.660 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:33.660 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.660 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:33.660 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:33.660 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.660 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:33.660 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:33.660 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.660 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:33.660 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:33.660 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.660 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:33.660 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:33.660 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.660 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:33.660 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:33.660 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.660 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:33.660 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:33.660 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.660 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:33.660 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:33.660 EAL: Hugepages will be freed exactly as allocated. 00:05:33.660 EAL: No shared files mode enabled, IPC is disabled 00:05:33.660 EAL: No shared files mode enabled, IPC is disabled 00:05:33.660 EAL: TSC frequency is ~2700000 KHz 00:05:33.660 EAL: Main lcore 0 is ready (tid=7f1fd489da00;cpuset=[0]) 00:05:33.660 EAL: Trying to obtain current memory policy. 00:05:33.660 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.660 EAL: Restoring previous memory policy: 0 00:05:33.660 EAL: request: mp_malloc_sync 00:05:33.660 EAL: No shared files mode enabled, IPC is disabled 00:05:33.660 EAL: Heap on socket 0 was expanded by 2MB 00:05:33.660 EAL: No shared files mode enabled, IPC is disabled 00:05:33.660 EAL: No shared files mode enabled, IPC is disabled 00:05:33.660 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:33.660 EAL: Mem event callback 'spdk:(nil)' registered 00:05:33.660 00:05:33.660 00:05:33.660 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.660 http://cunit.sourceforge.net/ 00:05:33.660 00:05:33.660 00:05:33.660 Suite: components_suite 00:05:33.660 Test: vtophys_malloc_test ...passed 00:05:33.660 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:33.660 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.660 EAL: Restoring previous memory policy: 4 00:05:33.660 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.660 EAL: request: mp_malloc_sync 00:05:33.660 EAL: No shared files mode enabled, IPC is disabled 00:05:33.660 EAL: Heap on socket 0 was expanded by 4MB 00:05:33.660 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.660 EAL: request: mp_malloc_sync 00:05:33.660 EAL: No shared files mode enabled, IPC is disabled 00:05:33.660 EAL: Heap on socket 0 was shrunk by 4MB 00:05:33.660 EAL: Trying to obtain current memory policy. 00:05:33.660 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.660 EAL: Restoring previous memory policy: 4 00:05:33.660 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.660 EAL: request: mp_malloc_sync 00:05:33.660 EAL: No shared files mode enabled, IPC is disabled 00:05:33.660 EAL: Heap on socket 0 was expanded by 6MB 00:05:33.660 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.660 EAL: request: mp_malloc_sync 00:05:33.660 EAL: No shared files mode enabled, IPC is disabled 00:05:33.660 EAL: Heap on socket 0 was shrunk by 6MB 00:05:33.660 EAL: Trying to obtain current memory policy. 00:05:33.660 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.660 EAL: Restoring previous memory policy: 4 00:05:33.660 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.660 EAL: request: mp_malloc_sync 00:05:33.660 EAL: No shared files mode enabled, IPC is disabled 00:05:33.660 EAL: Heap on socket 0 was expanded by 10MB 00:05:33.660 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.660 EAL: request: mp_malloc_sync 00:05:33.660 EAL: No shared files mode enabled, IPC is disabled 00:05:33.660 EAL: Heap on socket 0 was shrunk by 10MB 00:05:33.660 EAL: Trying to obtain current memory policy. 
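The expand/shrink notices in this suite come from the 'spdk:(nil)' mem event callback registered above, and each expansion is backed by the 2 MB hugepages laid out in the memseg lists. The pool can be watched from the host while the test runs (generic commands, not taken from the harness):

    grep -i '^hugepages_' /proc/meminfo   # total/free/reserved 2 MB pages
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages   # per-NUMA-node free pages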
00:05:33.660 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.660 EAL: Restoring previous memory policy: 4 00:05:33.660 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.660 EAL: request: mp_malloc_sync 00:05:33.660 EAL: No shared files mode enabled, IPC is disabled 00:05:33.660 EAL: Heap on socket 0 was expanded by 18MB 00:05:33.660 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.660 EAL: request: mp_malloc_sync 00:05:33.660 EAL: No shared files mode enabled, IPC is disabled 00:05:33.660 EAL: Heap on socket 0 was shrunk by 18MB 00:05:33.660 EAL: Trying to obtain current memory policy. 00:05:33.660 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.660 EAL: Restoring previous memory policy: 4 00:05:33.660 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.660 EAL: request: mp_malloc_sync 00:05:33.660 EAL: No shared files mode enabled, IPC is disabled 00:05:33.660 EAL: Heap on socket 0 was expanded by 34MB 00:05:33.660 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.660 EAL: request: mp_malloc_sync 00:05:33.660 EAL: No shared files mode enabled, IPC is disabled 00:05:33.660 EAL: Heap on socket 0 was shrunk by 34MB 00:05:33.660 EAL: Trying to obtain current memory policy. 00:05:33.660 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.660 EAL: Restoring previous memory policy: 4 00:05:33.660 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.660 EAL: request: mp_malloc_sync 00:05:33.660 EAL: No shared files mode enabled, IPC is disabled 00:05:33.660 EAL: Heap on socket 0 was expanded by 66MB 00:05:33.660 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.660 EAL: request: mp_malloc_sync 00:05:33.660 EAL: No shared files mode enabled, IPC is disabled 00:05:33.660 EAL: Heap on socket 0 was shrunk by 66MB 00:05:33.660 EAL: Trying to obtain current memory policy. 00:05:33.660 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.660 EAL: Restoring previous memory policy: 4 00:05:33.660 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.660 EAL: request: mp_malloc_sync 00:05:33.660 EAL: No shared files mode enabled, IPC is disabled 00:05:33.660 EAL: Heap on socket 0 was expanded by 130MB 00:05:33.660 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.919 EAL: request: mp_malloc_sync 00:05:33.919 EAL: No shared files mode enabled, IPC is disabled 00:05:33.919 EAL: Heap on socket 0 was shrunk by 130MB 00:05:33.919 EAL: Trying to obtain current memory policy. 00:05:33.919 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.919 EAL: Restoring previous memory policy: 4 00:05:33.919 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.919 EAL: request: mp_malloc_sync 00:05:33.919 EAL: No shared files mode enabled, IPC is disabled 00:05:33.919 EAL: Heap on socket 0 was expanded by 258MB 00:05:33.919 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.919 EAL: request: mp_malloc_sync 00:05:33.919 EAL: No shared files mode enabled, IPC is disabled 00:05:33.919 EAL: Heap on socket 0 was shrunk by 258MB 00:05:33.919 EAL: Trying to obtain current memory policy. 
00:05:33.919 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.179 EAL: Restoring previous memory policy: 4 00:05:34.179 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.179 EAL: request: mp_malloc_sync 00:05:34.179 EAL: No shared files mode enabled, IPC is disabled 00:05:34.179 EAL: Heap on socket 0 was expanded by 514MB 00:05:34.179 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.179 EAL: request: mp_malloc_sync 00:05:34.179 EAL: No shared files mode enabled, IPC is disabled 00:05:34.179 EAL: Heap on socket 0 was shrunk by 514MB 00:05:34.179 EAL: Trying to obtain current memory policy. 00:05:34.179 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.748 EAL: Restoring previous memory policy: 4 00:05:34.748 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.748 EAL: request: mp_malloc_sync 00:05:34.748 EAL: No shared files mode enabled, IPC is disabled 00:05:34.748 EAL: Heap on socket 0 was expanded by 1026MB 00:05:34.748 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.007 EAL: request: mp_malloc_sync 00:05:35.007 EAL: No shared files mode enabled, IPC is disabled 00:05:35.007 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:35.007 passed 00:05:35.007 00:05:35.007 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.007 suites 1 1 n/a 0 0 00:05:35.007 tests 2 2 2 0 0 00:05:35.007 asserts 497 497 497 0 n/a 00:05:35.007 00:05:35.007 Elapsed time = 1.324 seconds 00:05:35.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.007 EAL: request: mp_malloc_sync 00:05:35.007 EAL: No shared files mode enabled, IPC is disabled 00:05:35.007 EAL: Heap on socket 0 was shrunk by 2MB 00:05:35.007 EAL: No shared files mode enabled, IPC is disabled 00:05:35.007 EAL: No shared files mode enabled, IPC is disabled 00:05:35.007 EAL: No shared files mode enabled, IPC is disabled 00:05:35.007 00:05:35.007 real 0m1.440s 00:05:35.007 user 0m0.846s 00:05:35.007 sys 0m0.561s 00:05:35.007 06:50:55 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.007 06:50:55 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:35.007 ************************************ 00:05:35.007 END TEST env_vtophys 00:05:35.007 ************************************ 00:05:35.007 06:50:55 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:35.007 06:50:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.007 06:50:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.007 06:50:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.007 ************************************ 00:05:35.007 START TEST env_pci 00:05:35.007 ************************************ 00:05:35.007 06:50:55 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:35.007 00:05:35.007 00:05:35.007 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.007 http://cunit.sourceforge.net/ 00:05:35.007 00:05:35.007 00:05:35.007 Suite: pci 00:05:35.007 Test: pci_hook ...[2024-11-18 06:50:55.920697] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 98514 has claimed it 00:05:35.007 EAL: Cannot find device (10000:00:01.0) 00:05:35.007 EAL: Failed to attach device on primary process 00:05:35.007 passed 00:05:35.007 00:05:35.007 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.007 
suites 1 1 n/a 0 0 00:05:35.007 tests 1 1 1 0 0 00:05:35.007 asserts 25 25 25 0 n/a 00:05:35.007 00:05:35.007 Elapsed time = 0.020 seconds 00:05:35.007 00:05:35.007 real 0m0.031s 00:05:35.007 user 0m0.008s 00:05:35.007 sys 0m0.023s 00:05:35.007 06:50:55 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.007 06:50:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:35.007 ************************************ 00:05:35.007 END TEST env_pci 00:05:35.007 ************************************ 00:05:35.007 06:50:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:35.007 06:50:55 env -- env/env.sh@15 -- # uname 00:05:35.007 06:50:55 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:35.007 06:50:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:35.007 06:50:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:35.007 06:50:55 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:35.007 06:50:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.007 06:50:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.268 ************************************ 00:05:35.268 START TEST env_dpdk_post_init 00:05:35.268 ************************************ 00:05:35.268 06:50:55 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:35.268 EAL: Detected CPU lcores: 48 00:05:35.268 EAL: Detected NUMA nodes: 2 00:05:35.268 EAL: Detected shared linkage of DPDK 00:05:35.268 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:35.268 EAL: Selected IOVA mode 'VA' 00:05:35.268 EAL: VFIO support initialized 00:05:35.268 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:35.268 EAL: Using IOMMU type 1 (Type 1) 00:05:35.268 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:35.268 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:35.268 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:35.268 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:35.268 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:35.268 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:35.268 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:35.268 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:35.268 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:35.268 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:35.268 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:35.268 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:35.529 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:35.529 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:35.529 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:35.529 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:36.102 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:39.386 EAL: 
Releasing PCI mapped resource for 0000:88:00.0 00:05:39.386 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:39.645 Starting DPDK initialization... 00:05:39.645 Starting SPDK post initialization... 00:05:39.645 SPDK NVMe probe 00:05:39.645 Attaching to 0000:88:00.0 00:05:39.645 Attached to 0000:88:00.0 00:05:39.645 Cleaning up... 00:05:39.645 00:05:39.645 real 0m4.387s 00:05:39.645 user 0m3.297s 00:05:39.645 sys 0m0.149s 00:05:39.645 06:51:00 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.645 06:51:00 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:39.645 ************************************ 00:05:39.645 END TEST env_dpdk_post_init 00:05:39.645 ************************************ 00:05:39.645 06:51:00 env -- env/env.sh@26 -- # uname 00:05:39.645 06:51:00 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:39.645 06:51:00 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:39.645 06:51:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.645 06:51:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.645 06:51:00 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.645 ************************************ 00:05:39.645 START TEST env_mem_callbacks 00:05:39.645 ************************************ 00:05:39.645 06:51:00 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:39.645 EAL: Detected CPU lcores: 48 00:05:39.645 EAL: Detected NUMA nodes: 2 00:05:39.645 EAL: Detected shared linkage of DPDK 00:05:39.645 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:39.645 EAL: Selected IOVA mode 'VA' 00:05:39.645 EAL: VFIO support initialized 00:05:39.645 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.645 00:05:39.645 00:05:39.645 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.645 http://cunit.sourceforge.net/ 00:05:39.645 00:05:39.645 00:05:39.645 Suite: memory 00:05:39.645 Test: test ... 
00:05:39.645 register 0x200000200000 2097152 00:05:39.645 malloc 3145728 00:05:39.645 register 0x200000400000 4194304 00:05:39.645 buf 0x200000500000 len 3145728 PASSED 00:05:39.645 malloc 64 00:05:39.645 buf 0x2000004fff40 len 64 PASSED 00:05:39.645 malloc 4194304 00:05:39.645 register 0x200000800000 6291456 00:05:39.645 buf 0x200000a00000 len 4194304 PASSED 00:05:39.645 free 0x200000500000 3145728 00:05:39.645 free 0x2000004fff40 64 00:05:39.645 unregister 0x200000400000 4194304 PASSED 00:05:39.645 free 0x200000a00000 4194304 00:05:39.645 unregister 0x200000800000 6291456 PASSED 00:05:39.645 malloc 8388608 00:05:39.645 register 0x200000400000 10485760 00:05:39.645 buf 0x200000600000 len 8388608 PASSED 00:05:39.645 free 0x200000600000 8388608 00:05:39.645 unregister 0x200000400000 10485760 PASSED 00:05:39.645 passed 00:05:39.645 00:05:39.645 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.645 suites 1 1 n/a 0 0 00:05:39.645 tests 1 1 1 0 0 00:05:39.645 asserts 15 15 15 0 n/a 00:05:39.645 00:05:39.645 Elapsed time = 0.005 seconds 00:05:39.645 00:05:39.645 real 0m0.048s 00:05:39.645 user 0m0.012s 00:05:39.645 sys 0m0.036s 00:05:39.645 06:51:00 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.645 06:51:00 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:39.645 ************************************ 00:05:39.645 END TEST env_mem_callbacks 00:05:39.645 ************************************ 00:05:39.645 00:05:39.645 real 0m6.445s 00:05:39.645 user 0m4.515s 00:05:39.645 sys 0m0.977s 00:05:39.645 06:51:00 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.645 06:51:00 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.645 ************************************ 00:05:39.645 END TEST env 00:05:39.645 ************************************ 00:05:39.645 06:51:00 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:39.645 06:51:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.645 06:51:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.645 06:51:00 -- common/autotest_common.sh@10 -- # set +x 00:05:39.645 ************************************ 00:05:39.645 START TEST rpc 00:05:39.645 ************************************ 00:05:39.645 06:51:00 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:39.645 * Looking for test storage... 
00:05:39.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:39.645 06:51:00 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:39.645 06:51:00 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:39.645 06:51:00 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:39.905 06:51:00 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:39.905 06:51:00 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.905 06:51:00 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.905 06:51:00 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.905 06:51:00 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.905 06:51:00 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.905 06:51:00 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.905 06:51:00 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.905 06:51:00 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.905 06:51:00 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.905 06:51:00 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.905 06:51:00 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.905 06:51:00 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:39.905 06:51:00 rpc -- scripts/common.sh@345 -- # : 1 00:05:39.905 06:51:00 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.905 06:51:00 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.905 06:51:00 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:39.905 06:51:00 rpc -- scripts/common.sh@353 -- # local d=1 00:05:39.905 06:51:00 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.905 06:51:00 rpc -- scripts/common.sh@355 -- # echo 1 00:05:39.905 06:51:00 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.905 06:51:00 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:39.905 06:51:00 rpc -- scripts/common.sh@353 -- # local d=2 00:05:39.905 06:51:00 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.905 06:51:00 rpc -- scripts/common.sh@355 -- # echo 2 00:05:39.906 06:51:00 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.906 06:51:00 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.906 06:51:00 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.906 06:51:00 rpc -- scripts/common.sh@368 -- # return 0 00:05:39.906 06:51:00 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.906 06:51:00 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:39.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.906 --rc genhtml_branch_coverage=1 00:05:39.906 --rc genhtml_function_coverage=1 00:05:39.906 --rc genhtml_legend=1 00:05:39.906 --rc geninfo_all_blocks=1 00:05:39.906 --rc geninfo_unexecuted_blocks=1 00:05:39.906 00:05:39.906 ' 00:05:39.906 06:51:00 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:39.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.906 --rc genhtml_branch_coverage=1 00:05:39.906 --rc genhtml_function_coverage=1 00:05:39.906 --rc genhtml_legend=1 00:05:39.906 --rc geninfo_all_blocks=1 00:05:39.906 --rc geninfo_unexecuted_blocks=1 00:05:39.906 00:05:39.906 ' 00:05:39.906 06:51:00 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:39.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.906 --rc genhtml_branch_coverage=1 00:05:39.906 --rc genhtml_function_coverage=1 
00:05:39.906 --rc genhtml_legend=1 00:05:39.906 --rc geninfo_all_blocks=1 00:05:39.906 --rc geninfo_unexecuted_blocks=1 00:05:39.906 00:05:39.906 ' 00:05:39.906 06:51:00 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:39.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.906 --rc genhtml_branch_coverage=1 00:05:39.906 --rc genhtml_function_coverage=1 00:05:39.906 --rc genhtml_legend=1 00:05:39.906 --rc geninfo_all_blocks=1 00:05:39.906 --rc geninfo_unexecuted_blocks=1 00:05:39.906 00:05:39.906 ' 00:05:39.906 06:51:00 rpc -- rpc/rpc.sh@65 -- # spdk_pid=99237 00:05:39.906 06:51:00 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:39.906 06:51:00 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.906 06:51:00 rpc -- rpc/rpc.sh@67 -- # waitforlisten 99237 00:05:39.906 06:51:00 rpc -- common/autotest_common.sh@835 -- # '[' -z 99237 ']' 00:05:39.906 06:51:00 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.906 06:51:00 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.906 06:51:00 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.906 06:51:00 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.906 06:51:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.906 [2024-11-18 06:51:00.748018] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:05:39.906 [2024-11-18 06:51:00.748136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99237 ] 00:05:39.906 [2024-11-18 06:51:00.819886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.906 [2024-11-18 06:51:00.868744] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:39.906 [2024-11-18 06:51:00.868830] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 99237' to capture a snapshot of events at runtime. 00:05:39.906 [2024-11-18 06:51:00.868844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:39.906 [2024-11-18 06:51:00.868855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:39.906 [2024-11-18 06:51:00.868864] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid99237 for offline analysis/debug. 
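A short aside on the trace notices above: the target was launched with '-e bdev', so the bdev tracepoint group is being recorded into the shared-memory file named in the log. A minimal sketch of capturing that data, using only the invocation the NOTICE lines themselves suggest (assumes spdk_trace is on PATH or run from the build/bin directory of this workspace; the copy destination is illustrative):
  # dump a snapshot of the bdev tracepoints from the running target (pid 99237), as the NOTICE suggests
  spdk_trace -s spdk_tgt -p 99237
  # or keep the shared-memory trace file around for offline analysis/debug
  cp /dev/shm/spdk_tgt_trace.pid99237 /tmp/spdk_tgt_trace.pid99237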
00:05:39.906 [2024-11-18 06:51:00.869400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.165 06:51:01 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.165 06:51:01 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:40.165 06:51:01 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:40.165 06:51:01 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:40.165 06:51:01 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:40.165 06:51:01 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:40.165 06:51:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.165 06:51:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.165 06:51:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.424 ************************************ 00:05:40.424 START TEST rpc_integrity 00:05:40.424 ************************************ 00:05:40.424 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:40.424 06:51:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:40.424 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.424 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.424 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.424 06:51:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:40.425 06:51:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:40.425 06:51:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:40.425 06:51:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:40.425 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.425 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.425 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.425 06:51:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:40.425 06:51:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:40.425 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.425 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.425 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.425 06:51:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:40.425 { 00:05:40.425 "name": "Malloc0", 00:05:40.425 "aliases": [ 00:05:40.425 "254da33f-8ce3-4056-a63a-3d41456cf627" 00:05:40.425 ], 00:05:40.425 "product_name": "Malloc disk", 00:05:40.425 "block_size": 512, 00:05:40.425 "num_blocks": 16384, 00:05:40.425 "uuid": "254da33f-8ce3-4056-a63a-3d41456cf627", 00:05:40.425 "assigned_rate_limits": { 00:05:40.425 "rw_ios_per_sec": 0, 00:05:40.425 "rw_mbytes_per_sec": 0, 00:05:40.425 "r_mbytes_per_sec": 0, 00:05:40.425 "w_mbytes_per_sec": 0 00:05:40.425 }, 
00:05:40.425 "claimed": false, 00:05:40.425 "zoned": false, 00:05:40.425 "supported_io_types": { 00:05:40.425 "read": true, 00:05:40.425 "write": true, 00:05:40.425 "unmap": true, 00:05:40.425 "flush": true, 00:05:40.425 "reset": true, 00:05:40.425 "nvme_admin": false, 00:05:40.425 "nvme_io": false, 00:05:40.425 "nvme_io_md": false, 00:05:40.425 "write_zeroes": true, 00:05:40.425 "zcopy": true, 00:05:40.425 "get_zone_info": false, 00:05:40.425 "zone_management": false, 00:05:40.425 "zone_append": false, 00:05:40.425 "compare": false, 00:05:40.425 "compare_and_write": false, 00:05:40.425 "abort": true, 00:05:40.425 "seek_hole": false, 00:05:40.425 "seek_data": false, 00:05:40.425 "copy": true, 00:05:40.425 "nvme_iov_md": false 00:05:40.425 }, 00:05:40.425 "memory_domains": [ 00:05:40.425 { 00:05:40.425 "dma_device_id": "system", 00:05:40.425 "dma_device_type": 1 00:05:40.425 }, 00:05:40.425 { 00:05:40.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.425 "dma_device_type": 2 00:05:40.425 } 00:05:40.425 ], 00:05:40.425 "driver_specific": {} 00:05:40.425 } 00:05:40.425 ]' 00:05:40.425 06:51:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:40.425 06:51:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:40.425 06:51:01 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:40.425 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.425 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.425 [2024-11-18 06:51:01.259611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:40.425 [2024-11-18 06:51:01.259652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:40.425 [2024-11-18 06:51:01.259674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d1e8d0 00:05:40.425 [2024-11-18 06:51:01.259688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:40.425 [2024-11-18 06:51:01.261049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:40.425 [2024-11-18 06:51:01.261071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:40.425 Passthru0 00:05:40.425 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.425 06:51:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:40.425 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.425 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.425 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.425 06:51:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:40.425 { 00:05:40.425 "name": "Malloc0", 00:05:40.425 "aliases": [ 00:05:40.425 "254da33f-8ce3-4056-a63a-3d41456cf627" 00:05:40.425 ], 00:05:40.425 "product_name": "Malloc disk", 00:05:40.425 "block_size": 512, 00:05:40.425 "num_blocks": 16384, 00:05:40.425 "uuid": "254da33f-8ce3-4056-a63a-3d41456cf627", 00:05:40.425 "assigned_rate_limits": { 00:05:40.425 "rw_ios_per_sec": 0, 00:05:40.425 "rw_mbytes_per_sec": 0, 00:05:40.425 "r_mbytes_per_sec": 0, 00:05:40.425 "w_mbytes_per_sec": 0 00:05:40.425 }, 00:05:40.425 "claimed": true, 00:05:40.425 "claim_type": "exclusive_write", 00:05:40.425 "zoned": false, 00:05:40.425 "supported_io_types": { 00:05:40.425 "read": true, 00:05:40.425 "write": true, 00:05:40.425 "unmap": true, 00:05:40.425 "flush": 
true, 00:05:40.425 "reset": true, 00:05:40.425 "nvme_admin": false, 00:05:40.425 "nvme_io": false, 00:05:40.425 "nvme_io_md": false, 00:05:40.425 "write_zeroes": true, 00:05:40.425 "zcopy": true, 00:05:40.425 "get_zone_info": false, 00:05:40.425 "zone_management": false, 00:05:40.425 "zone_append": false, 00:05:40.425 "compare": false, 00:05:40.425 "compare_and_write": false, 00:05:40.425 "abort": true, 00:05:40.425 "seek_hole": false, 00:05:40.425 "seek_data": false, 00:05:40.425 "copy": true, 00:05:40.425 "nvme_iov_md": false 00:05:40.425 }, 00:05:40.425 "memory_domains": [ 00:05:40.425 { 00:05:40.425 "dma_device_id": "system", 00:05:40.425 "dma_device_type": 1 00:05:40.425 }, 00:05:40.425 { 00:05:40.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.425 "dma_device_type": 2 00:05:40.425 } 00:05:40.425 ], 00:05:40.425 "driver_specific": {} 00:05:40.425 }, 00:05:40.425 { 00:05:40.425 "name": "Passthru0", 00:05:40.425 "aliases": [ 00:05:40.425 "fbab358d-ee9b-5d10-8c0a-48567f2a258b" 00:05:40.425 ], 00:05:40.425 "product_name": "passthru", 00:05:40.425 "block_size": 512, 00:05:40.425 "num_blocks": 16384, 00:05:40.425 "uuid": "fbab358d-ee9b-5d10-8c0a-48567f2a258b", 00:05:40.425 "assigned_rate_limits": { 00:05:40.425 "rw_ios_per_sec": 0, 00:05:40.425 "rw_mbytes_per_sec": 0, 00:05:40.425 "r_mbytes_per_sec": 0, 00:05:40.425 "w_mbytes_per_sec": 0 00:05:40.425 }, 00:05:40.425 "claimed": false, 00:05:40.425 "zoned": false, 00:05:40.425 "supported_io_types": { 00:05:40.425 "read": true, 00:05:40.425 "write": true, 00:05:40.425 "unmap": true, 00:05:40.425 "flush": true, 00:05:40.425 "reset": true, 00:05:40.425 "nvme_admin": false, 00:05:40.425 "nvme_io": false, 00:05:40.425 "nvme_io_md": false, 00:05:40.425 "write_zeroes": true, 00:05:40.425 "zcopy": true, 00:05:40.425 "get_zone_info": false, 00:05:40.425 "zone_management": false, 00:05:40.425 "zone_append": false, 00:05:40.425 "compare": false, 00:05:40.425 "compare_and_write": false, 00:05:40.425 "abort": true, 00:05:40.425 "seek_hole": false, 00:05:40.425 "seek_data": false, 00:05:40.425 "copy": true, 00:05:40.425 "nvme_iov_md": false 00:05:40.425 }, 00:05:40.425 "memory_domains": [ 00:05:40.425 { 00:05:40.425 "dma_device_id": "system", 00:05:40.425 "dma_device_type": 1 00:05:40.425 }, 00:05:40.425 { 00:05:40.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.425 "dma_device_type": 2 00:05:40.425 } 00:05:40.425 ], 00:05:40.425 "driver_specific": { 00:05:40.425 "passthru": { 00:05:40.425 "name": "Passthru0", 00:05:40.425 "base_bdev_name": "Malloc0" 00:05:40.425 } 00:05:40.425 } 00:05:40.425 } 00:05:40.425 ]' 00:05:40.425 06:51:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:40.425 06:51:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:40.425 06:51:01 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:40.425 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.425 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.425 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.425 06:51:01 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:40.425 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.425 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.425 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.425 06:51:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:40.425 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.425 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.425 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.425 06:51:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:40.425 06:51:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:40.425 06:51:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:40.425 00:05:40.425 real 0m0.207s 00:05:40.425 user 0m0.139s 00:05:40.425 sys 0m0.016s 00:05:40.425 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.425 06:51:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.425 ************************************ 00:05:40.425 END TEST rpc_integrity 00:05:40.425 ************************************ 00:05:40.425 06:51:01 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:40.425 06:51:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.426 06:51:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.426 06:51:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.685 ************************************ 00:05:40.685 START TEST rpc_plugins 00:05:40.685 ************************************ 00:05:40.685 06:51:01 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:40.685 06:51:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:40.685 06:51:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.685 06:51:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.685 06:51:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.685 06:51:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:40.685 06:51:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:40.685 06:51:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.685 06:51:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.685 06:51:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.685 06:51:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:40.685 { 00:05:40.685 "name": "Malloc1", 00:05:40.685 "aliases": [ 00:05:40.685 "77ab0a1d-fb9c-4c55-a12d-7ea29d20c594" 00:05:40.685 ], 00:05:40.685 "product_name": "Malloc disk", 00:05:40.685 "block_size": 4096, 00:05:40.685 "num_blocks": 256, 00:05:40.685 "uuid": "77ab0a1d-fb9c-4c55-a12d-7ea29d20c594", 00:05:40.685 "assigned_rate_limits": { 00:05:40.685 "rw_ios_per_sec": 0, 00:05:40.685 "rw_mbytes_per_sec": 0, 00:05:40.685 "r_mbytes_per_sec": 0, 00:05:40.685 "w_mbytes_per_sec": 0 00:05:40.685 }, 00:05:40.685 "claimed": false, 00:05:40.685 "zoned": false, 00:05:40.685 "supported_io_types": { 00:05:40.685 "read": true, 00:05:40.685 "write": true, 00:05:40.685 "unmap": true, 00:05:40.685 "flush": true, 00:05:40.685 "reset": true, 00:05:40.685 "nvme_admin": false, 00:05:40.685 "nvme_io": false, 00:05:40.685 "nvme_io_md": false, 00:05:40.685 "write_zeroes": true, 00:05:40.685 "zcopy": true, 00:05:40.685 "get_zone_info": false, 00:05:40.685 "zone_management": false, 00:05:40.685 "zone_append": false, 00:05:40.685 "compare": false, 00:05:40.685 "compare_and_write": false, 00:05:40.685 "abort": true, 00:05:40.685 "seek_hole": false, 00:05:40.685 "seek_data": false, 00:05:40.685 "copy": true, 00:05:40.685 "nvme_iov_md": false 
00:05:40.685 }, 00:05:40.685 "memory_domains": [ 00:05:40.685 { 00:05:40.685 "dma_device_id": "system", 00:05:40.685 "dma_device_type": 1 00:05:40.685 }, 00:05:40.685 { 00:05:40.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.685 "dma_device_type": 2 00:05:40.685 } 00:05:40.685 ], 00:05:40.685 "driver_specific": {} 00:05:40.685 } 00:05:40.685 ]' 00:05:40.685 06:51:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:40.685 06:51:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:40.685 06:51:01 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:40.685 06:51:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.685 06:51:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.685 06:51:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.685 06:51:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:40.685 06:51:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.685 06:51:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.685 06:51:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.685 06:51:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:40.685 06:51:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:40.685 06:51:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:40.685 00:05:40.685 real 0m0.104s 00:05:40.685 user 0m0.068s 00:05:40.685 sys 0m0.009s 00:05:40.685 06:51:01 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.685 06:51:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.685 ************************************ 00:05:40.685 END TEST rpc_plugins 00:05:40.685 ************************************ 00:05:40.685 06:51:01 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:40.685 06:51:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.685 06:51:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.685 06:51:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.685 ************************************ 00:05:40.685 START TEST rpc_trace_cmd_test 00:05:40.685 ************************************ 00:05:40.685 06:51:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:40.685 06:51:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:40.685 06:51:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:40.685 06:51:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.685 06:51:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:40.685 06:51:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.685 06:51:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:40.685 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid99237", 00:05:40.685 "tpoint_group_mask": "0x8", 00:05:40.685 "iscsi_conn": { 00:05:40.685 "mask": "0x2", 00:05:40.685 "tpoint_mask": "0x0" 00:05:40.685 }, 00:05:40.685 "scsi": { 00:05:40.685 "mask": "0x4", 00:05:40.685 "tpoint_mask": "0x0" 00:05:40.685 }, 00:05:40.685 "bdev": { 00:05:40.685 "mask": "0x8", 00:05:40.685 "tpoint_mask": "0xffffffffffffffff" 00:05:40.685 }, 00:05:40.685 "nvmf_rdma": { 00:05:40.685 "mask": "0x10", 00:05:40.685 "tpoint_mask": "0x0" 00:05:40.685 }, 00:05:40.685 "nvmf_tcp": { 00:05:40.685 "mask": "0x20", 00:05:40.685 
"tpoint_mask": "0x0" 00:05:40.685 }, 00:05:40.685 "ftl": { 00:05:40.685 "mask": "0x40", 00:05:40.685 "tpoint_mask": "0x0" 00:05:40.685 }, 00:05:40.685 "blobfs": { 00:05:40.685 "mask": "0x80", 00:05:40.685 "tpoint_mask": "0x0" 00:05:40.685 }, 00:05:40.685 "dsa": { 00:05:40.685 "mask": "0x200", 00:05:40.685 "tpoint_mask": "0x0" 00:05:40.685 }, 00:05:40.685 "thread": { 00:05:40.685 "mask": "0x400", 00:05:40.685 "tpoint_mask": "0x0" 00:05:40.685 }, 00:05:40.685 "nvme_pcie": { 00:05:40.685 "mask": "0x800", 00:05:40.685 "tpoint_mask": "0x0" 00:05:40.685 }, 00:05:40.685 "iaa": { 00:05:40.685 "mask": "0x1000", 00:05:40.685 "tpoint_mask": "0x0" 00:05:40.685 }, 00:05:40.685 "nvme_tcp": { 00:05:40.685 "mask": "0x2000", 00:05:40.685 "tpoint_mask": "0x0" 00:05:40.685 }, 00:05:40.685 "bdev_nvme": { 00:05:40.685 "mask": "0x4000", 00:05:40.685 "tpoint_mask": "0x0" 00:05:40.685 }, 00:05:40.685 "sock": { 00:05:40.685 "mask": "0x8000", 00:05:40.685 "tpoint_mask": "0x0" 00:05:40.685 }, 00:05:40.685 "blob": { 00:05:40.685 "mask": "0x10000", 00:05:40.685 "tpoint_mask": "0x0" 00:05:40.685 }, 00:05:40.685 "bdev_raid": { 00:05:40.685 "mask": "0x20000", 00:05:40.685 "tpoint_mask": "0x0" 00:05:40.685 }, 00:05:40.685 "scheduler": { 00:05:40.685 "mask": "0x40000", 00:05:40.685 "tpoint_mask": "0x0" 00:05:40.685 } 00:05:40.685 }' 00:05:40.685 06:51:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:40.685 06:51:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:40.685 06:51:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:40.685 06:51:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:40.686 06:51:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:40.945 06:51:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:40.945 06:51:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:40.945 06:51:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:40.945 06:51:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:40.945 06:51:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:40.945 00:05:40.945 real 0m0.184s 00:05:40.945 user 0m0.164s 00:05:40.945 sys 0m0.014s 00:05:40.945 06:51:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.945 06:51:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:40.945 ************************************ 00:05:40.945 END TEST rpc_trace_cmd_test 00:05:40.945 ************************************ 00:05:40.945 06:51:01 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:40.945 06:51:01 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:40.945 06:51:01 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:40.945 06:51:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.945 06:51:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.945 06:51:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.945 ************************************ 00:05:40.945 START TEST rpc_daemon_integrity 00:05:40.945 ************************************ 00:05:40.945 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:40.945 06:51:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:40.945 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.945 06:51:01 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.945 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.945 06:51:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:40.945 06:51:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:40.945 06:51:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:40.945 06:51:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:40.945 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.945 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.945 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.945 06:51:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:40.945 06:51:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:40.945 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.945 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.945 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.945 06:51:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:40.945 { 00:05:40.945 "name": "Malloc2", 00:05:40.945 "aliases": [ 00:05:40.945 "fad4772d-b031-478b-b6f1-55010fc68902" 00:05:40.945 ], 00:05:40.945 "product_name": "Malloc disk", 00:05:40.945 "block_size": 512, 00:05:40.945 "num_blocks": 16384, 00:05:40.945 "uuid": "fad4772d-b031-478b-b6f1-55010fc68902", 00:05:40.945 "assigned_rate_limits": { 00:05:40.945 "rw_ios_per_sec": 0, 00:05:40.945 "rw_mbytes_per_sec": 0, 00:05:40.945 "r_mbytes_per_sec": 0, 00:05:40.945 "w_mbytes_per_sec": 0 00:05:40.945 }, 00:05:40.945 "claimed": false, 00:05:40.945 "zoned": false, 00:05:40.945 "supported_io_types": { 00:05:40.945 "read": true, 00:05:40.945 "write": true, 00:05:40.945 "unmap": true, 00:05:40.945 "flush": true, 00:05:40.945 "reset": true, 00:05:40.945 "nvme_admin": false, 00:05:40.945 "nvme_io": false, 00:05:40.945 "nvme_io_md": false, 00:05:40.945 "write_zeroes": true, 00:05:40.945 "zcopy": true, 00:05:40.945 "get_zone_info": false, 00:05:40.945 "zone_management": false, 00:05:40.945 "zone_append": false, 00:05:40.945 "compare": false, 00:05:40.945 "compare_and_write": false, 00:05:40.945 "abort": true, 00:05:40.945 "seek_hole": false, 00:05:40.945 "seek_data": false, 00:05:40.945 "copy": true, 00:05:40.945 "nvme_iov_md": false 00:05:40.945 }, 00:05:40.945 "memory_domains": [ 00:05:40.945 { 00:05:40.945 "dma_device_id": "system", 00:05:40.945 "dma_device_type": 1 00:05:40.945 }, 00:05:40.945 { 00:05:40.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.945 "dma_device_type": 2 00:05:40.945 } 00:05:40.945 ], 00:05:40.945 "driver_specific": {} 00:05:40.945 } 00:05:40.945 ]' 00:05:40.945 06:51:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:40.945 06:51:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:40.946 06:51:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:40.946 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.946 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.946 [2024-11-18 06:51:01.889890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:40.946 
[2024-11-18 06:51:01.889941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:40.946 [2024-11-18 06:51:01.889965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d1dd50 00:05:40.946 [2024-11-18 06:51:01.889978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:40.946 [2024-11-18 06:51:01.891205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:40.946 [2024-11-18 06:51:01.891228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:40.946 Passthru0 00:05:40.946 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.946 06:51:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:40.946 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.946 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.946 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.946 06:51:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:40.946 { 00:05:40.946 "name": "Malloc2", 00:05:40.946 "aliases": [ 00:05:40.946 "fad4772d-b031-478b-b6f1-55010fc68902" 00:05:40.946 ], 00:05:40.946 "product_name": "Malloc disk", 00:05:40.946 "block_size": 512, 00:05:40.946 "num_blocks": 16384, 00:05:40.946 "uuid": "fad4772d-b031-478b-b6f1-55010fc68902", 00:05:40.946 "assigned_rate_limits": { 00:05:40.946 "rw_ios_per_sec": 0, 00:05:40.946 "rw_mbytes_per_sec": 0, 00:05:40.946 "r_mbytes_per_sec": 0, 00:05:40.946 "w_mbytes_per_sec": 0 00:05:40.946 }, 00:05:40.946 "claimed": true, 00:05:40.946 "claim_type": "exclusive_write", 00:05:40.946 "zoned": false, 00:05:40.946 "supported_io_types": { 00:05:40.946 "read": true, 00:05:40.946 "write": true, 00:05:40.946 "unmap": true, 00:05:40.946 "flush": true, 00:05:40.946 "reset": true, 00:05:40.946 "nvme_admin": false, 00:05:40.946 "nvme_io": false, 00:05:40.946 "nvme_io_md": false, 00:05:40.946 "write_zeroes": true, 00:05:40.946 "zcopy": true, 00:05:40.946 "get_zone_info": false, 00:05:40.946 "zone_management": false, 00:05:40.946 "zone_append": false, 00:05:40.946 "compare": false, 00:05:40.946 "compare_and_write": false, 00:05:40.946 "abort": true, 00:05:40.946 "seek_hole": false, 00:05:40.946 "seek_data": false, 00:05:40.946 "copy": true, 00:05:40.946 "nvme_iov_md": false 00:05:40.946 }, 00:05:40.946 "memory_domains": [ 00:05:40.946 { 00:05:40.946 "dma_device_id": "system", 00:05:40.946 "dma_device_type": 1 00:05:40.946 }, 00:05:40.946 { 00:05:40.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.946 "dma_device_type": 2 00:05:40.946 } 00:05:40.946 ], 00:05:40.946 "driver_specific": {} 00:05:40.946 }, 00:05:40.946 { 00:05:40.946 "name": "Passthru0", 00:05:40.946 "aliases": [ 00:05:40.946 "d6f9de17-5265-57ae-b078-2f546eeac271" 00:05:40.946 ], 00:05:40.946 "product_name": "passthru", 00:05:40.946 "block_size": 512, 00:05:40.946 "num_blocks": 16384, 00:05:40.946 "uuid": "d6f9de17-5265-57ae-b078-2f546eeac271", 00:05:40.946 "assigned_rate_limits": { 00:05:40.946 "rw_ios_per_sec": 0, 00:05:40.946 "rw_mbytes_per_sec": 0, 00:05:40.946 "r_mbytes_per_sec": 0, 00:05:40.946 "w_mbytes_per_sec": 0 00:05:40.946 }, 00:05:40.946 "claimed": false, 00:05:40.946 "zoned": false, 00:05:40.946 "supported_io_types": { 00:05:40.946 "read": true, 00:05:40.946 "write": true, 00:05:40.946 "unmap": true, 00:05:40.946 "flush": true, 00:05:40.946 "reset": true, 
00:05:40.946 "nvme_admin": false, 00:05:40.946 "nvme_io": false, 00:05:40.946 "nvme_io_md": false, 00:05:40.946 "write_zeroes": true, 00:05:40.946 "zcopy": true, 00:05:40.946 "get_zone_info": false, 00:05:40.946 "zone_management": false, 00:05:40.946 "zone_append": false, 00:05:40.946 "compare": false, 00:05:40.946 "compare_and_write": false, 00:05:40.946 "abort": true, 00:05:40.946 "seek_hole": false, 00:05:40.946 "seek_data": false, 00:05:40.946 "copy": true, 00:05:40.946 "nvme_iov_md": false 00:05:40.946 }, 00:05:40.946 "memory_domains": [ 00:05:40.946 { 00:05:40.946 "dma_device_id": "system", 00:05:40.946 "dma_device_type": 1 00:05:40.946 }, 00:05:40.946 { 00:05:40.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.946 "dma_device_type": 2 00:05:40.946 } 00:05:40.946 ], 00:05:40.946 "driver_specific": { 00:05:40.946 "passthru": { 00:05:40.946 "name": "Passthru0", 00:05:40.946 "base_bdev_name": "Malloc2" 00:05:40.946 } 00:05:40.946 } 00:05:40.946 } 00:05:40.946 ]' 00:05:40.946 06:51:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:41.205 06:51:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:41.205 06:51:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:41.205 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.205 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.205 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.205 06:51:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:41.205 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.205 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.205 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.205 06:51:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:41.205 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.205 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.205 06:51:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.205 06:51:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:41.205 06:51:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:41.205 06:51:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:41.205 00:05:41.205 real 0m0.222s 00:05:41.205 user 0m0.145s 00:05:41.205 sys 0m0.023s 00:05:41.205 06:51:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.205 06:51:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.205 ************************************ 00:05:41.205 END TEST rpc_daemon_integrity 00:05:41.205 ************************************ 00:05:41.205 06:51:02 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:41.205 06:51:02 rpc -- rpc/rpc.sh@84 -- # killprocess 99237 00:05:41.205 06:51:02 rpc -- common/autotest_common.sh@954 -- # '[' -z 99237 ']' 00:05:41.205 06:51:02 rpc -- common/autotest_common.sh@958 -- # kill -0 99237 00:05:41.205 06:51:02 rpc -- common/autotest_common.sh@959 -- # uname 00:05:41.205 06:51:02 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.205 06:51:02 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99237 00:05:41.205 
06:51:02 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.205 06:51:02 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.205 06:51:02 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99237' 00:05:41.205 killing process with pid 99237 00:05:41.205 06:51:02 rpc -- common/autotest_common.sh@973 -- # kill 99237 00:05:41.205 06:51:02 rpc -- common/autotest_common.sh@978 -- # wait 99237 00:05:41.516 00:05:41.516 real 0m1.883s 00:05:41.516 user 0m2.337s 00:05:41.516 sys 0m0.597s 00:05:41.516 06:51:02 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.516 06:51:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.516 ************************************ 00:05:41.516 END TEST rpc 00:05:41.516 ************************************ 00:05:41.516 06:51:02 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:41.516 06:51:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.516 06:51:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.516 06:51:02 -- common/autotest_common.sh@10 -- # set +x 00:05:41.516 ************************************ 00:05:41.516 START TEST skip_rpc 00:05:41.516 ************************************ 00:05:41.516 06:51:02 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:41.775 * Looking for test storage... 00:05:41.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:41.775 06:51:02 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.775 06:51:02 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.775 06:51:02 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.776 06:51:02 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.776 06:51:02 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:41.776 06:51:02 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.776 06:51:02 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.776 --rc genhtml_branch_coverage=1 00:05:41.776 --rc genhtml_function_coverage=1 00:05:41.776 --rc genhtml_legend=1 00:05:41.776 --rc geninfo_all_blocks=1 00:05:41.776 --rc geninfo_unexecuted_blocks=1 00:05:41.776 00:05:41.776 ' 00:05:41.776 06:51:02 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.776 --rc genhtml_branch_coverage=1 00:05:41.776 --rc genhtml_function_coverage=1 00:05:41.776 --rc genhtml_legend=1 00:05:41.776 --rc geninfo_all_blocks=1 00:05:41.776 --rc geninfo_unexecuted_blocks=1 00:05:41.776 00:05:41.776 ' 00:05:41.776 06:51:02 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.776 --rc genhtml_branch_coverage=1 00:05:41.776 --rc genhtml_function_coverage=1 00:05:41.776 --rc genhtml_legend=1 00:05:41.776 --rc geninfo_all_blocks=1 00:05:41.776 --rc geninfo_unexecuted_blocks=1 00:05:41.776 00:05:41.776 ' 00:05:41.776 06:51:02 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.776 --rc genhtml_branch_coverage=1 00:05:41.776 --rc genhtml_function_coverage=1 00:05:41.776 --rc genhtml_legend=1 00:05:41.776 --rc geninfo_all_blocks=1 00:05:41.776 --rc geninfo_unexecuted_blocks=1 00:05:41.776 00:05:41.776 ' 00:05:41.776 06:51:02 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:41.776 06:51:02 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:41.776 06:51:02 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:41.776 06:51:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.776 06:51:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.776 06:51:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.776 ************************************ 00:05:41.776 START TEST skip_rpc 00:05:41.776 ************************************ 00:05:41.776 06:51:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:41.776 
06:51:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=99738 00:05:41.776 06:51:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:41.776 06:51:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.776 06:51:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:41.776 [2024-11-18 06:51:02.713076] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:05:41.776 [2024-11-18 06:51:02.713160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99738 ] 00:05:42.036 [2024-11-18 06:51:02.778296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.036 [2024-11-18 06:51:02.824143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 99738 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 99738 ']' 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 99738 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99738 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99738' 00:05:47.305 killing process with pid 99738 00:05:47.305 06:51:07 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 99738 00:05:47.305 06:51:07 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 99738 00:05:47.305 00:05:47.305 real 0m5.405s 00:05:47.305 user 0m5.107s 00:05:47.305 sys 0m0.302s 00:05:47.305 06:51:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.305 06:51:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.305 ************************************ 00:05:47.305 END TEST skip_rpc 00:05:47.305 ************************************ 00:05:47.305 06:51:08 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:47.305 06:51:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.305 06:51:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.305 06:51:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.305 ************************************ 00:05:47.305 START TEST skip_rpc_with_json 00:05:47.305 ************************************ 00:05:47.305 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:47.305 06:51:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:47.305 06:51:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=100753 00:05:47.305 06:51:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.305 06:51:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.305 06:51:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 100753 00:05:47.306 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 100753 ']' 00:05:47.306 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.306 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.306 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.306 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.306 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.306 [2024-11-18 06:51:08.166913] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
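A note on the flow the skip_rpc_with_json test starting here exercises: the freshly started target is driven over JSON-RPC, first confirming that querying the TCP transport before one exists fails, then creating the transport and saving the resulting configuration. A rough equivalent using scripts/rpc.py directly (the rpc_cmd wrapper seen in the log effectively resolves to it); the output path below is illustrative:
  # before a transport exists, the query returns a JSON-RPC error ("No such device")
  scripts/rpc.py nvmf_get_transports --trtype tcp || true
  # create the TCP transport, then persist the running configuration as JSON
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > /tmp/config.json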
00:05:47.306 [2024-11-18 06:51:08.167024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100753 ] 00:05:47.306 [2024-11-18 06:51:08.232453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.306 [2024-11-18 06:51:08.282005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.565 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.565 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:47.565 06:51:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:47.565 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.565 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.824 [2024-11-18 06:51:08.545361] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:47.824 request: 00:05:47.824 { 00:05:47.824 "trtype": "tcp", 00:05:47.824 "method": "nvmf_get_transports", 00:05:47.824 "req_id": 1 00:05:47.824 } 00:05:47.824 Got JSON-RPC error response 00:05:47.824 response: 00:05:47.824 { 00:05:47.824 "code": -19, 00:05:47.824 "message": "No such device" 00:05:47.824 } 00:05:47.824 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:47.824 06:51:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:47.824 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.824 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.824 [2024-11-18 06:51:08.553458] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:47.824 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.824 06:51:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:47.824 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.824 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.824 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.824 06:51:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:47.824 { 00:05:47.824 "subsystems": [ 00:05:47.824 { 00:05:47.824 "subsystem": "fsdev", 00:05:47.824 "config": [ 00:05:47.824 { 00:05:47.824 "method": "fsdev_set_opts", 00:05:47.824 "params": { 00:05:47.824 "fsdev_io_pool_size": 65535, 00:05:47.824 "fsdev_io_cache_size": 256 00:05:47.824 } 00:05:47.824 } 00:05:47.824 ] 00:05:47.824 }, 00:05:47.824 { 00:05:47.824 "subsystem": "vfio_user_target", 00:05:47.824 "config": null 00:05:47.824 }, 00:05:47.824 { 00:05:47.824 "subsystem": "keyring", 00:05:47.824 "config": [] 00:05:47.824 }, 00:05:47.824 { 00:05:47.824 "subsystem": "iobuf", 00:05:47.824 "config": [ 00:05:47.824 { 00:05:47.824 "method": "iobuf_set_options", 00:05:47.824 "params": { 00:05:47.824 "small_pool_count": 8192, 00:05:47.824 "large_pool_count": 1024, 00:05:47.824 "small_bufsize": 8192, 00:05:47.824 "large_bufsize": 135168, 00:05:47.824 "enable_numa": false 00:05:47.824 } 00:05:47.824 } 00:05:47.824 
] 00:05:47.824 }, 00:05:47.824 { 00:05:47.824 "subsystem": "sock", 00:05:47.824 "config": [ 00:05:47.824 { 00:05:47.824 "method": "sock_set_default_impl", 00:05:47.824 "params": { 00:05:47.824 "impl_name": "posix" 00:05:47.824 } 00:05:47.824 }, 00:05:47.824 { 00:05:47.824 "method": "sock_impl_set_options", 00:05:47.824 "params": { 00:05:47.824 "impl_name": "ssl", 00:05:47.824 "recv_buf_size": 4096, 00:05:47.824 "send_buf_size": 4096, 00:05:47.824 "enable_recv_pipe": true, 00:05:47.824 "enable_quickack": false, 00:05:47.824 "enable_placement_id": 0, 00:05:47.824 "enable_zerocopy_send_server": true, 00:05:47.824 "enable_zerocopy_send_client": false, 00:05:47.824 "zerocopy_threshold": 0, 00:05:47.824 "tls_version": 0, 00:05:47.824 "enable_ktls": false 00:05:47.824 } 00:05:47.824 }, 00:05:47.824 { 00:05:47.824 "method": "sock_impl_set_options", 00:05:47.824 "params": { 00:05:47.824 "impl_name": "posix", 00:05:47.824 "recv_buf_size": 2097152, 00:05:47.824 "send_buf_size": 2097152, 00:05:47.824 "enable_recv_pipe": true, 00:05:47.824 "enable_quickack": false, 00:05:47.824 "enable_placement_id": 0, 00:05:47.824 "enable_zerocopy_send_server": true, 00:05:47.824 "enable_zerocopy_send_client": false, 00:05:47.824 "zerocopy_threshold": 0, 00:05:47.824 "tls_version": 0, 00:05:47.824 "enable_ktls": false 00:05:47.824 } 00:05:47.824 } 00:05:47.824 ] 00:05:47.824 }, 00:05:47.824 { 00:05:47.824 "subsystem": "vmd", 00:05:47.824 "config": [] 00:05:47.824 }, 00:05:47.824 { 00:05:47.824 "subsystem": "accel", 00:05:47.824 "config": [ 00:05:47.824 { 00:05:47.824 "method": "accel_set_options", 00:05:47.824 "params": { 00:05:47.824 "small_cache_size": 128, 00:05:47.824 "large_cache_size": 16, 00:05:47.824 "task_count": 2048, 00:05:47.824 "sequence_count": 2048, 00:05:47.824 "buf_count": 2048 00:05:47.824 } 00:05:47.824 } 00:05:47.824 ] 00:05:47.824 }, 00:05:47.824 { 00:05:47.824 "subsystem": "bdev", 00:05:47.824 "config": [ 00:05:47.824 { 00:05:47.824 "method": "bdev_set_options", 00:05:47.824 "params": { 00:05:47.824 "bdev_io_pool_size": 65535, 00:05:47.824 "bdev_io_cache_size": 256, 00:05:47.824 "bdev_auto_examine": true, 00:05:47.824 "iobuf_small_cache_size": 128, 00:05:47.824 "iobuf_large_cache_size": 16 00:05:47.824 } 00:05:47.824 }, 00:05:47.824 { 00:05:47.824 "method": "bdev_raid_set_options", 00:05:47.824 "params": { 00:05:47.824 "process_window_size_kb": 1024, 00:05:47.824 "process_max_bandwidth_mb_sec": 0 00:05:47.824 } 00:05:47.824 }, 00:05:47.824 { 00:05:47.824 "method": "bdev_iscsi_set_options", 00:05:47.824 "params": { 00:05:47.824 "timeout_sec": 30 00:05:47.824 } 00:05:47.824 }, 00:05:47.824 { 00:05:47.824 "method": "bdev_nvme_set_options", 00:05:47.824 "params": { 00:05:47.824 "action_on_timeout": "none", 00:05:47.824 "timeout_us": 0, 00:05:47.824 "timeout_admin_us": 0, 00:05:47.824 "keep_alive_timeout_ms": 10000, 00:05:47.824 "arbitration_burst": 0, 00:05:47.824 "low_priority_weight": 0, 00:05:47.824 "medium_priority_weight": 0, 00:05:47.824 "high_priority_weight": 0, 00:05:47.824 "nvme_adminq_poll_period_us": 10000, 00:05:47.824 "nvme_ioq_poll_period_us": 0, 00:05:47.824 "io_queue_requests": 0, 00:05:47.824 "delay_cmd_submit": true, 00:05:47.824 "transport_retry_count": 4, 00:05:47.824 "bdev_retry_count": 3, 00:05:47.824 "transport_ack_timeout": 0, 00:05:47.824 "ctrlr_loss_timeout_sec": 0, 00:05:47.824 "reconnect_delay_sec": 0, 00:05:47.824 "fast_io_fail_timeout_sec": 0, 00:05:47.824 "disable_auto_failback": false, 00:05:47.824 "generate_uuids": false, 00:05:47.824 "transport_tos": 0, 
00:05:47.824 "nvme_error_stat": false, 00:05:47.824 "rdma_srq_size": 0, 00:05:47.824 "io_path_stat": false, 00:05:47.824 "allow_accel_sequence": false, 00:05:47.824 "rdma_max_cq_size": 0, 00:05:47.824 "rdma_cm_event_timeout_ms": 0, 00:05:47.824 "dhchap_digests": [ 00:05:47.824 "sha256", 00:05:47.824 "sha384", 00:05:47.825 "sha512" 00:05:47.825 ], 00:05:47.825 "dhchap_dhgroups": [ 00:05:47.825 "null", 00:05:47.825 "ffdhe2048", 00:05:47.825 "ffdhe3072", 00:05:47.825 "ffdhe4096", 00:05:47.825 "ffdhe6144", 00:05:47.825 "ffdhe8192" 00:05:47.825 ] 00:05:47.825 } 00:05:47.825 }, 00:05:47.825 { 00:05:47.825 "method": "bdev_nvme_set_hotplug", 00:05:47.825 "params": { 00:05:47.825 "period_us": 100000, 00:05:47.825 "enable": false 00:05:47.825 } 00:05:47.825 }, 00:05:47.825 { 00:05:47.825 "method": "bdev_wait_for_examine" 00:05:47.825 } 00:05:47.825 ] 00:05:47.825 }, 00:05:47.825 { 00:05:47.825 "subsystem": "scsi", 00:05:47.825 "config": null 00:05:47.825 }, 00:05:47.825 { 00:05:47.825 "subsystem": "scheduler", 00:05:47.825 "config": [ 00:05:47.825 { 00:05:47.825 "method": "framework_set_scheduler", 00:05:47.825 "params": { 00:05:47.825 "name": "static" 00:05:47.825 } 00:05:47.825 } 00:05:47.825 ] 00:05:47.825 }, 00:05:47.825 { 00:05:47.825 "subsystem": "vhost_scsi", 00:05:47.825 "config": [] 00:05:47.825 }, 00:05:47.825 { 00:05:47.825 "subsystem": "vhost_blk", 00:05:47.825 "config": [] 00:05:47.825 }, 00:05:47.825 { 00:05:47.825 "subsystem": "ublk", 00:05:47.825 "config": [] 00:05:47.825 }, 00:05:47.825 { 00:05:47.825 "subsystem": "nbd", 00:05:47.825 "config": [] 00:05:47.825 }, 00:05:47.825 { 00:05:47.825 "subsystem": "nvmf", 00:05:47.825 "config": [ 00:05:47.825 { 00:05:47.825 "method": "nvmf_set_config", 00:05:47.825 "params": { 00:05:47.825 "discovery_filter": "match_any", 00:05:47.825 "admin_cmd_passthru": { 00:05:47.825 "identify_ctrlr": false 00:05:47.825 }, 00:05:47.825 "dhchap_digests": [ 00:05:47.825 "sha256", 00:05:47.825 "sha384", 00:05:47.825 "sha512" 00:05:47.825 ], 00:05:47.825 "dhchap_dhgroups": [ 00:05:47.825 "null", 00:05:47.825 "ffdhe2048", 00:05:47.825 "ffdhe3072", 00:05:47.825 "ffdhe4096", 00:05:47.825 "ffdhe6144", 00:05:47.825 "ffdhe8192" 00:05:47.825 ] 00:05:47.825 } 00:05:47.825 }, 00:05:47.825 { 00:05:47.825 "method": "nvmf_set_max_subsystems", 00:05:47.825 "params": { 00:05:47.825 "max_subsystems": 1024 00:05:47.825 } 00:05:47.825 }, 00:05:47.825 { 00:05:47.825 "method": "nvmf_set_crdt", 00:05:47.825 "params": { 00:05:47.825 "crdt1": 0, 00:05:47.825 "crdt2": 0, 00:05:47.825 "crdt3": 0 00:05:47.825 } 00:05:47.825 }, 00:05:47.825 { 00:05:47.825 "method": "nvmf_create_transport", 00:05:47.825 "params": { 00:05:47.825 "trtype": "TCP", 00:05:47.825 "max_queue_depth": 128, 00:05:47.825 "max_io_qpairs_per_ctrlr": 127, 00:05:47.825 "in_capsule_data_size": 4096, 00:05:47.825 "max_io_size": 131072, 00:05:47.825 "io_unit_size": 131072, 00:05:47.825 "max_aq_depth": 128, 00:05:47.825 "num_shared_buffers": 511, 00:05:47.825 "buf_cache_size": 4294967295, 00:05:47.825 "dif_insert_or_strip": false, 00:05:47.825 "zcopy": false, 00:05:47.825 "c2h_success": true, 00:05:47.825 "sock_priority": 0, 00:05:47.825 "abort_timeout_sec": 1, 00:05:47.825 "ack_timeout": 0, 00:05:47.825 "data_wr_pool_size": 0 00:05:47.825 } 00:05:47.825 } 00:05:47.825 ] 00:05:47.825 }, 00:05:47.825 { 00:05:47.825 "subsystem": "iscsi", 00:05:47.825 "config": [ 00:05:47.825 { 00:05:47.825 "method": "iscsi_set_options", 00:05:47.825 "params": { 00:05:47.825 "node_base": "iqn.2016-06.io.spdk", 00:05:47.825 "max_sessions": 
128, 00:05:47.825 "max_connections_per_session": 2, 00:05:47.825 "max_queue_depth": 64, 00:05:47.825 "default_time2wait": 2, 00:05:47.825 "default_time2retain": 20, 00:05:47.825 "first_burst_length": 8192, 00:05:47.825 "immediate_data": true, 00:05:47.825 "allow_duplicated_isid": false, 00:05:47.825 "error_recovery_level": 0, 00:05:47.825 "nop_timeout": 60, 00:05:47.825 "nop_in_interval": 30, 00:05:47.825 "disable_chap": false, 00:05:47.825 "require_chap": false, 00:05:47.825 "mutual_chap": false, 00:05:47.825 "chap_group": 0, 00:05:47.825 "max_large_datain_per_connection": 64, 00:05:47.825 "max_r2t_per_connection": 4, 00:05:47.825 "pdu_pool_size": 36864, 00:05:47.825 "immediate_data_pool_size": 16384, 00:05:47.825 "data_out_pool_size": 2048 00:05:47.825 } 00:05:47.825 } 00:05:47.825 ] 00:05:47.825 } 00:05:47.825 ] 00:05:47.825 } 00:05:47.825 06:51:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:47.825 06:51:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 100753 00:05:47.825 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 100753 ']' 00:05:47.825 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 100753 00:05:47.825 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:47.825 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.825 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100753 00:05:47.825 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.825 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.825 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100753' 00:05:47.825 killing process with pid 100753 00:05:47.825 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 100753 00:05:47.825 06:51:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 100753 00:05:48.392 06:51:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=101055 00:05:48.392 06:51:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:48.393 06:51:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 101055 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 101055 ']' 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 101055 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101055 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 101055' 00:05:53.663 killing process with pid 101055 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 101055 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 101055 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:53.663 00:05:53.663 real 0m6.421s 00:05:53.663 user 0m6.086s 00:05:53.663 sys 0m0.670s 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.663 ************************************ 00:05:53.663 END TEST skip_rpc_with_json 00:05:53.663 ************************************ 00:05:53.663 06:51:14 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:53.663 06:51:14 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.663 06:51:14 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.663 06:51:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.663 ************************************ 00:05:53.663 START TEST skip_rpc_with_delay 00:05:53.663 ************************************ 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:53.663 06:51:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:53.923 [2024-11-18 
06:51:14.646423] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:53.923 06:51:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:53.923 06:51:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:53.923 06:51:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:53.923 06:51:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:53.923 00:05:53.923 real 0m0.073s 00:05:53.923 user 0m0.050s 00:05:53.923 sys 0m0.022s 00:05:53.923 06:51:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.923 06:51:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:53.923 ************************************ 00:05:53.923 END TEST skip_rpc_with_delay 00:05:53.923 ************************************ 00:05:53.923 06:51:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:53.923 06:51:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:53.923 06:51:14 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:53.923 06:51:14 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.923 06:51:14 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.923 06:51:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.923 ************************************ 00:05:53.923 START TEST exit_on_failed_rpc_init 00:05:53.923 ************************************ 00:05:53.923 06:51:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:53.923 06:51:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=101784 00:05:53.923 06:51:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.923 06:51:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 101784 00:05:53.923 06:51:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 101784 ']' 00:05:53.923 06:51:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.923 06:51:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.923 06:51:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.924 06:51:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.924 06:51:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:53.924 [2024-11-18 06:51:14.763717] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
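The skip_rpc_with_delay case above is a pure argument-validation check: spdk_tgt refuses '--wait-for-rpc' when '--no-rpc-server' disables the RPC server, and the test only asserts the non-zero exit. A minimal sketch of that negative check, assuming a local build at ./build/bin/spdk_tgt (path is illustrative):

  # Expect spdk_tgt to reject --wait-for-rpc when the RPC server is disabled.
  SPDK_TGT=./build/bin/spdk_tgt      # assumed path; adjust to your build tree
  if "$SPDK_TGT" --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: conflicting flags were accepted" >&2
      exit 1
  else
      echo "expected failure: --wait-for-rpc needs a running RPC server"
  fi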
00:05:53.924 [2024-11-18 06:51:14.763804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101784 ] 00:05:53.924 [2024-11-18 06:51:14.828047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.924 [2024-11-18 06:51:14.871566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.182 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.182 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:54.182 06:51:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.182 06:51:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:54.182 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:54.182 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:54.182 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.182 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.183 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.183 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.183 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.183 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.183 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.183 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:54.183 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:54.441 [2024-11-18 06:51:15.182369] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:05:54.441 [2024-11-18 06:51:15.182444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101794 ] 00:05:54.441 [2024-11-18 06:51:15.248240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.441 [2024-11-18 06:51:15.295959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.441 [2024-11-18 06:51:15.296082] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
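The failure being provoked here is the second spdk_tgt instance trying to bind the default RPC socket /var/tmp/spdk.sock that pid 101784 already owns. Outside of this negative test, two targets can coexist by giving each its own RPC socket with -r, the same flag the json_config test uses further down; a hedged sketch (the second socket name is illustrative):

  SPDK_TGT=./build/bin/spdk_tgt                        # assumed path to a local build
  "$SPDK_TGT" -m 0x1 &                                 # first instance: default /var/tmp/spdk.sock
  "$SPDK_TGT" -m 0x2 -r /var/tmp/spdk_second.sock &    # second instance: its own RPC socket
  # each instance is then driven with: scripts/rpc.py -s <its socket> <method>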
00:05:54.441 [2024-11-18 06:51:15.296107] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:54.441 [2024-11-18 06:51:15.296119] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:54.441 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:54.441 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:54.441 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:54.441 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:54.441 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:54.441 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:54.441 06:51:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:54.441 06:51:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 101784 00:05:54.441 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 101784 ']' 00:05:54.441 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 101784 00:05:54.441 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:54.441 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.441 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101784 00:05:54.441 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.441 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.441 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101784' 00:05:54.441 killing process with pid 101784 00:05:54.441 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 101784 00:05:54.441 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 101784 00:05:55.009 00:05:55.009 real 0m1.055s 00:05:55.009 user 0m1.142s 00:05:55.009 sys 0m0.428s 00:05:55.009 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.009 06:51:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:55.009 ************************************ 00:05:55.009 END TEST exit_on_failed_rpc_init 00:05:55.009 ************************************ 00:05:55.009 06:51:15 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:55.009 00:05:55.009 real 0m13.310s 00:05:55.009 user 0m12.577s 00:05:55.009 sys 0m1.607s 00:05:55.009 06:51:15 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.009 06:51:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.009 ************************************ 00:05:55.009 END TEST skip_rpc 00:05:55.009 ************************************ 00:05:55.009 06:51:15 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:55.009 06:51:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.009 06:51:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.009 06:51:15 -- 
common/autotest_common.sh@10 -- # set +x 00:05:55.009 ************************************ 00:05:55.009 START TEST rpc_client 00:05:55.009 ************************************ 00:05:55.009 06:51:15 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:55.009 * Looking for test storage... 00:05:55.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:55.009 06:51:15 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:55.009 06:51:15 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:55.009 06:51:15 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:55.009 06:51:15 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:55.009 06:51:15 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.009 06:51:15 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.009 06:51:15 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.009 06:51:15 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.009 06:51:15 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.009 06:51:15 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.009 06:51:15 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.009 06:51:15 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.009 06:51:15 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.009 06:51:15 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.009 06:51:15 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.009 06:51:15 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:55.009 06:51:15 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:55.009 06:51:15 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.009 06:51:15 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.009 06:51:15 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:55.009 06:51:15 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:55.009 06:51:15 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.009 06:51:15 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:55.269 06:51:15 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.269 06:51:15 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:55.269 06:51:15 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:55.269 06:51:15 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.269 06:51:15 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:55.270 06:51:15 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.270 06:51:15 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.270 06:51:15 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.270 06:51:15 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:55.270 06:51:15 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.270 06:51:15 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:55.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.270 --rc genhtml_branch_coverage=1 00:05:55.270 --rc genhtml_function_coverage=1 00:05:55.270 --rc genhtml_legend=1 00:05:55.270 --rc geninfo_all_blocks=1 00:05:55.270 --rc geninfo_unexecuted_blocks=1 00:05:55.270 00:05:55.270 ' 00:05:55.270 06:51:15 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:55.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.270 --rc genhtml_branch_coverage=1 00:05:55.270 --rc genhtml_function_coverage=1 00:05:55.270 --rc genhtml_legend=1 00:05:55.270 --rc geninfo_all_blocks=1 00:05:55.270 --rc geninfo_unexecuted_blocks=1 00:05:55.270 00:05:55.270 ' 00:05:55.270 06:51:15 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:55.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.270 --rc genhtml_branch_coverage=1 00:05:55.270 --rc genhtml_function_coverage=1 00:05:55.270 --rc genhtml_legend=1 00:05:55.270 --rc geninfo_all_blocks=1 00:05:55.270 --rc geninfo_unexecuted_blocks=1 00:05:55.270 00:05:55.270 ' 00:05:55.270 06:51:15 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:55.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.270 --rc genhtml_branch_coverage=1 00:05:55.270 --rc genhtml_function_coverage=1 00:05:55.270 --rc genhtml_legend=1 00:05:55.270 --rc geninfo_all_blocks=1 00:05:55.270 --rc geninfo_unexecuted_blocks=1 00:05:55.270 00:05:55.270 ' 00:05:55.270 06:51:15 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:55.270 OK 00:05:55.270 06:51:16 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:55.270 00:05:55.270 real 0m0.165s 00:05:55.270 user 0m0.104s 00:05:55.270 sys 0m0.071s 00:05:55.270 06:51:16 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.270 06:51:16 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:55.270 ************************************ 00:05:55.270 END TEST rpc_client 00:05:55.270 ************************************ 00:05:55.270 06:51:16 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
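Both rpc_client and the json_config run started on the line above source scripts/common.sh, and the 'lt 1.15 2' trace is its cmp_versions helper comparing two version strings field by field before choosing the LCOV_OPTS printed above. A simplified sketch of that comparison (not the exact scripts/common.sh implementation):

  # Return 0 (true) if version $1 sorts strictly before version $2.
  ver_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1
  }
  ver_lt 1.15 2 && echo "1.15 sorts before 2"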
00:05:55.270 06:51:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.270 06:51:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.270 06:51:16 -- common/autotest_common.sh@10 -- # set +x 00:05:55.270 ************************************ 00:05:55.270 START TEST json_config 00:05:55.270 ************************************ 00:05:55.270 06:51:16 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:55.270 06:51:16 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:55.270 06:51:16 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:55.270 06:51:16 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:55.270 06:51:16 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:55.270 06:51:16 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.270 06:51:16 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.270 06:51:16 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.270 06:51:16 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.270 06:51:16 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.270 06:51:16 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.270 06:51:16 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.270 06:51:16 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.270 06:51:16 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.270 06:51:16 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.270 06:51:16 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.270 06:51:16 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:55.270 06:51:16 json_config -- scripts/common.sh@345 -- # : 1 00:05:55.270 06:51:16 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.270 06:51:16 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.270 06:51:16 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:55.270 06:51:16 json_config -- scripts/common.sh@353 -- # local d=1 00:05:55.270 06:51:16 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.270 06:51:16 json_config -- scripts/common.sh@355 -- # echo 1 00:05:55.270 06:51:16 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.270 06:51:16 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:55.270 06:51:16 json_config -- scripts/common.sh@353 -- # local d=2 00:05:55.270 06:51:16 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.270 06:51:16 json_config -- scripts/common.sh@355 -- # echo 2 00:05:55.270 06:51:16 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.270 06:51:16 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.270 06:51:16 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.270 06:51:16 json_config -- scripts/common.sh@368 -- # return 0 00:05:55.270 06:51:16 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.270 06:51:16 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:55.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.270 --rc genhtml_branch_coverage=1 00:05:55.270 --rc genhtml_function_coverage=1 00:05:55.270 --rc genhtml_legend=1 00:05:55.270 --rc geninfo_all_blocks=1 00:05:55.270 --rc geninfo_unexecuted_blocks=1 00:05:55.270 00:05:55.270 ' 00:05:55.270 06:51:16 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:55.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.270 --rc genhtml_branch_coverage=1 00:05:55.270 --rc genhtml_function_coverage=1 00:05:55.270 --rc genhtml_legend=1 00:05:55.270 --rc geninfo_all_blocks=1 00:05:55.270 --rc geninfo_unexecuted_blocks=1 00:05:55.270 00:05:55.270 ' 00:05:55.270 06:51:16 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:55.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.270 --rc genhtml_branch_coverage=1 00:05:55.270 --rc genhtml_function_coverage=1 00:05:55.270 --rc genhtml_legend=1 00:05:55.270 --rc geninfo_all_blocks=1 00:05:55.270 --rc geninfo_unexecuted_blocks=1 00:05:55.270 00:05:55.270 ' 00:05:55.270 06:51:16 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:55.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.270 --rc genhtml_branch_coverage=1 00:05:55.270 --rc genhtml_function_coverage=1 00:05:55.270 --rc genhtml_legend=1 00:05:55.270 --rc geninfo_all_blocks=1 00:05:55.270 --rc geninfo_unexecuted_blocks=1 00:05:55.270 00:05:55.270 ' 00:05:55.270 06:51:16 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:55.270 06:51:16 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:55.270 06:51:16 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.270 06:51:16 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.270 06:51:16 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:55.270 06:51:16 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:55.270 06:51:16 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.270 06:51:16 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.270 06:51:16 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:55.270 06:51:16 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.270 06:51:16 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.270 06:51:16 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.270 06:51:16 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:55.270 06:51:16 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:55.270 06:51:16 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.270 06:51:16 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.270 06:51:16 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:55.270 06:51:16 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:55.270 06:51:16 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:55.270 06:51:16 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:55.270 06:51:16 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.270 06:51:16 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.270 06:51:16 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.270 06:51:16 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.270 06:51:16 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.271 06:51:16 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.271 06:51:16 json_config -- paths/export.sh@5 -- # export PATH 00:05:55.271 06:51:16 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.271 06:51:16 json_config -- nvmf/common.sh@51 -- # : 0 00:05:55.271 06:51:16 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:55.271 06:51:16 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
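nvmf/common.sh, sourced here, also builds the host identity used by later 'nvme connect' calls: NVME_HOSTNQN comes from 'nvme gen-hostnqn' and NVME_HOSTID is the UUID portion of that NQN, as the traced values show. One way to derive the same pair (the exact parsing inside common.sh may differ):

  NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep only the UUID after the last ':'
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  printf '%s\n' "${NVME_HOST[@]}"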
00:05:55.271 06:51:16 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:55.271 06:51:16 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:55.271 06:51:16 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:55.271 06:51:16 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:55.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:55.271 06:51:16 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:55.271 06:51:16 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:55.271 06:51:16 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:55.271 06:51:16 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:55.271 06:51:16 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:55.271 06:51:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:55.271 06:51:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:55.271 06:51:16 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:55.271 06:51:16 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:55.271 06:51:16 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:55.271 06:51:16 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:55.271 06:51:16 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:55.271 06:51:16 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:55.271 06:51:16 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:55.271 06:51:16 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:55.271 06:51:16 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:55.271 06:51:16 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:55.271 06:51:16 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:55.271 06:51:16 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:55.271 INFO: JSON configuration test init 00:05:55.271 06:51:16 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:55.271 06:51:16 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:55.271 06:51:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.271 06:51:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.271 06:51:16 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:55.271 06:51:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.271 06:51:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.271 06:51:16 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:55.271 06:51:16 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:55.271 06:51:16 json_config -- json_config/common.sh@10 -- # shift 00:05:55.271 06:51:16 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:55.271 06:51:16 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:55.271 06:51:16 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:55.271 06:51:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:55.271 06:51:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:55.271 06:51:16 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=102054 00:05:55.271 06:51:16 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:55.271 06:51:16 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:55.271 Waiting for target to run... 00:05:55.271 06:51:16 json_config -- json_config/common.sh@25 -- # waitforlisten 102054 /var/tmp/spdk_tgt.sock 00:05:55.271 06:51:16 json_config -- common/autotest_common.sh@835 -- # '[' -z 102054 ']' 00:05:55.271 06:51:16 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:55.271 06:51:16 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.271 06:51:16 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:55.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:55.271 06:51:16 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.271 06:51:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.530 [2024-11-18 06:51:16.270247] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
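The target is launched here on a private RPC socket with --wait-for-rpc, so subsystem initialization stays paused until a configuration arrives over RPC; the test then pipes gen_nvme.sh output into load_config, as traced below. A condensed sketch of that startup, using the paths from this run:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/spdk_tgt.sock
  "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &
  # once the socket is listening, push a configuration to the paused target:
  "$SPDK/scripts/gen_nvme.sh" --json-with-subsystems | "$SPDK/scripts/rpc.py" -s "$SOCK" load_config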
00:05:55.530 [2024-11-18 06:51:16.270340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102054 ] 00:05:56.097 [2024-11-18 06:51:16.797127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.097 [2024-11-18 06:51:16.838048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.355 06:51:17 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.355 06:51:17 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:56.355 06:51:17 json_config -- json_config/common.sh@26 -- # echo '' 00:05:56.355 00:05:56.355 06:51:17 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:56.355 06:51:17 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:56.355 06:51:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:56.355 06:51:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.355 06:51:17 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:56.355 06:51:17 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:56.355 06:51:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:56.355 06:51:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.355 06:51:17 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:56.355 06:51:17 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:56.355 06:51:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:59.646 06:51:20 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:59.646 06:51:20 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:59.646 06:51:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:59.646 06:51:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.646 06:51:20 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:59.646 06:51:20 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:59.646 06:51:20 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:59.646 06:51:20 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:59.646 06:51:20 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:59.646 06:51:20 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:59.646 06:51:20 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:59.646 06:51:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:59.904 06:51:20 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:59.904 06:51:20 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:59.904 06:51:20 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:59.904 06:51:20 json_config -- 
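tgt_check_notification_types, running across the next few lines, asks the target which notification types are enabled and fails if the set differs from the expected bdev/fsdev register and unregister events. The same comparison, sketched:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/spdk_tgt.sock
  expected="bdev_register bdev_unregister fsdev_register fsdev_unregister"
  got=$("$SPDK/scripts/rpc.py" -s "$SOCK" notify_get_types | jq -r '.[]')
  # uniq -u keeps entries seen on only one side, i.e. the symmetric difference
  type_diff=$(echo $expected $got | tr ' ' '\n' | sort | uniq -u)
  [ -z "$type_diff" ] && echo "notification types match" || echo "mismatch: $type_diff"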
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:59.904 06:51:20 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:59.904 06:51:20 json_config -- json_config/json_config.sh@54 -- # sort 00:05:59.904 06:51:20 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:59.904 06:51:20 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:59.904 06:51:20 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:59.904 06:51:20 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:59.904 06:51:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:59.904 06:51:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.904 06:51:20 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:59.904 06:51:20 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:59.904 06:51:20 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:59.904 06:51:20 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:59.904 06:51:20 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:59.904 06:51:20 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:59.904 06:51:20 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:59.904 06:51:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:59.904 06:51:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.904 06:51:20 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:59.904 06:51:20 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:59.904 06:51:20 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:59.904 06:51:20 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:59.904 06:51:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:00.163 MallocForNvmf0 00:06:00.163 06:51:21 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:00.163 06:51:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:00.421 MallocForNvmf1 00:06:00.421 06:51:21 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:00.421 06:51:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:00.680 [2024-11-18 06:51:21.546537] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:00.680 06:51:21 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:00.680 06:51:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:00.938 06:51:21 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:00.938 06:51:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:01.196 06:51:22 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:01.196 06:51:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:01.455 06:51:22 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:01.455 06:51:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:01.713 [2024-11-18 06:51:22.617992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:01.713 06:51:22 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:01.713 06:51:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:01.713 06:51:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.713 06:51:22 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:01.713 06:51:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:01.713 06:51:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.713 06:51:22 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:01.713 06:51:22 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:01.713 06:51:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:01.971 MallocBdevForConfigChangeCheck 00:06:02.230 06:51:22 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:02.230 06:51:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:02.230 06:51:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.230 06:51:22 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:02.230 06:51:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:02.489 06:51:23 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:02.489 INFO: shutting down applications... 
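create_nvmf_subsystem_config, traced above, builds the storage stack one RPC at a time: two malloc bdevs, the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, its two namespaces, and a listener on 127.0.0.1:4420. The same sequence condensed into plain rpc.py calls (arguments copied from the trace):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420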
00:06:02.489 06:51:23 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:02.489 06:51:23 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:02.489 06:51:23 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:02.489 06:51:23 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:04.392 Calling clear_iscsi_subsystem 00:06:04.392 Calling clear_nvmf_subsystem 00:06:04.392 Calling clear_nbd_subsystem 00:06:04.392 Calling clear_ublk_subsystem 00:06:04.392 Calling clear_vhost_blk_subsystem 00:06:04.392 Calling clear_vhost_scsi_subsystem 00:06:04.392 Calling clear_bdev_subsystem 00:06:04.392 06:51:25 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:04.392 06:51:25 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:04.392 06:51:25 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:04.392 06:51:25 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:04.392 06:51:25 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:04.392 06:51:25 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:04.651 06:51:25 json_config -- json_config/json_config.sh@352 -- # break 00:06:04.651 06:51:25 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:04.651 06:51:25 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:04.651 06:51:25 json_config -- json_config/common.sh@31 -- # local app=target 00:06:04.651 06:51:25 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:04.651 06:51:25 json_config -- json_config/common.sh@35 -- # [[ -n 102054 ]] 00:06:04.651 06:51:25 json_config -- json_config/common.sh@38 -- # kill -SIGINT 102054 00:06:04.651 06:51:25 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:04.651 06:51:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:04.651 06:51:25 json_config -- json_config/common.sh@41 -- # kill -0 102054 00:06:04.651 06:51:25 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:05.221 06:51:25 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:05.221 06:51:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:05.221 06:51:25 json_config -- json_config/common.sh@41 -- # kill -0 102054 00:06:05.221 06:51:25 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:05.221 06:51:25 json_config -- json_config/common.sh@43 -- # break 00:06:05.221 06:51:25 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:05.221 06:51:25 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:05.221 SPDK target shutdown done 00:06:05.221 06:51:25 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:05.221 INFO: relaunching applications... 
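The shutdown just traced first clears the live configuration through clear_config.py, then sends SIGINT and polls with 'kill -0' in half-second steps (up to 30 tries in json_config/common.sh) before declaring the target gone. Roughly:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/spdk_tgt.sock
  PID=102054                                   # pid from this run; substitute your target's pid
  "$SPDK/test/json_config/clear_config.py" -s "$SOCK" clear_config
  kill -SIGINT "$PID"
  for i in $(seq 1 30); do
      kill -0 "$PID" 2>/dev/null || { echo "SPDK target shutdown done"; break; }
      sleep 0.5
  done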
00:06:05.221 06:51:25 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:05.221 06:51:25 json_config -- json_config/common.sh@9 -- # local app=target 00:06:05.221 06:51:25 json_config -- json_config/common.sh@10 -- # shift 00:06:05.221 06:51:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:05.221 06:51:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:05.221 06:51:25 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:05.221 06:51:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:05.221 06:51:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:05.221 06:51:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=103372 00:06:05.221 06:51:25 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:05.221 06:51:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:05.221 Waiting for target to run... 00:06:05.222 06:51:25 json_config -- json_config/common.sh@25 -- # waitforlisten 103372 /var/tmp/spdk_tgt.sock 00:06:05.222 06:51:25 json_config -- common/autotest_common.sh@835 -- # '[' -z 103372 ']' 00:06:05.222 06:51:25 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:05.222 06:51:25 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.222 06:51:25 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:05.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:05.222 06:51:25 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.222 06:51:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.222 [2024-11-18 06:51:26.015826] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:05.222 [2024-11-18 06:51:26.015905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103372 ] 00:06:05.792 [2024-11-18 06:51:26.507502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.792 [2024-11-18 06:51:26.548639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.085 [2024-11-18 06:51:29.591022] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.085 [2024-11-18 06:51:29.623465] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:09.085 06:51:29 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.085 06:51:29 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:09.085 06:51:29 json_config -- json_config/common.sh@26 -- # echo '' 00:06:09.085 00:06:09.085 06:51:29 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:09.085 06:51:29 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:09.085 INFO: Checking if target configuration is the same... 
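Relaunching reuses the JSON snapshot taken before shutdown: spdk_tgt is started with --json pointing at spdk_tgt_config.json and the test blocks until the RPC socket is back. A condensed sketch of the command recorded in the trace (backgrounding and the harness's waitforlisten helper are shown schematically):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
      -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json &
  # wait for the new target to listen on the UNIX-domain RPC socket
  waitforlisten $! /var/tmp/spdk_tgt.sock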
00:06:09.085 06:51:29 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:09.085 06:51:29 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:09.085 06:51:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:09.085 + '[' 2 -ne 2 ']' 00:06:09.085 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:09.085 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:09.085 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:09.085 +++ basename /dev/fd/62 00:06:09.085 ++ mktemp /tmp/62.XXX 00:06:09.085 + tmp_file_1=/tmp/62.zfJ 00:06:09.085 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:09.085 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:09.085 + tmp_file_2=/tmp/spdk_tgt_config.json.UCd 00:06:09.085 + ret=0 00:06:09.085 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:09.344 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:09.344 + diff -u /tmp/62.zfJ /tmp/spdk_tgt_config.json.UCd 00:06:09.344 + echo 'INFO: JSON config files are the same' 00:06:09.344 INFO: JSON config files are the same 00:06:09.344 + rm /tmp/62.zfJ /tmp/spdk_tgt_config.json.UCd 00:06:09.344 + exit 0 00:06:09.344 06:51:30 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:09.344 06:51:30 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:09.344 INFO: changing configuration and checking if this can be detected... 00:06:09.344 06:51:30 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:09.344 06:51:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:09.603 06:51:30 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:09.603 06:51:30 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:09.603 06:51:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:09.603 + '[' 2 -ne 2 ']' 00:06:09.603 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:09.603 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:09.603 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:09.603 +++ basename /dev/fd/62 00:06:09.603 ++ mktemp /tmp/62.XXX 00:06:09.603 + tmp_file_1=/tmp/62.zqV 00:06:09.603 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:09.603 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:09.603 + tmp_file_2=/tmp/spdk_tgt_config.json.hWF 00:06:09.603 + ret=0 00:06:09.603 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:09.861 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:10.120 + diff -u /tmp/62.zqV /tmp/spdk_tgt_config.json.hWF 00:06:10.120 + ret=1 00:06:10.120 + echo '=== Start of file: /tmp/62.zqV ===' 00:06:10.120 + cat /tmp/62.zqV 00:06:10.120 + echo '=== End of file: /tmp/62.zqV ===' 00:06:10.120 + echo '' 00:06:10.120 + echo '=== Start of file: /tmp/spdk_tgt_config.json.hWF ===' 00:06:10.120 + cat /tmp/spdk_tgt_config.json.hWF 00:06:10.120 + echo '=== End of file: /tmp/spdk_tgt_config.json.hWF ===' 00:06:10.120 + echo '' 00:06:10.120 + rm /tmp/62.zqV /tmp/spdk_tgt_config.json.hWF 00:06:10.120 + exit 1 00:06:10.120 06:51:30 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:10.120 INFO: configuration change detected. 00:06:10.120 06:51:30 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:10.120 06:51:30 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:10.120 06:51:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.120 06:51:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.120 06:51:30 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:10.120 06:51:30 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:10.120 06:51:30 json_config -- json_config/json_config.sh@324 -- # [[ -n 103372 ]] 00:06:10.120 06:51:30 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:10.120 06:51:30 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:10.120 06:51:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.120 06:51:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.120 06:51:30 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:10.120 06:51:30 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:10.120 06:51:30 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:10.120 06:51:30 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:10.120 06:51:30 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:10.120 06:51:30 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:10.120 06:51:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:10.120 06:51:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.120 06:51:30 json_config -- json_config/json_config.sh@330 -- # killprocess 103372 00:06:10.120 06:51:30 json_config -- common/autotest_common.sh@954 -- # '[' -z 103372 ']' 00:06:10.120 06:51:30 json_config -- common/autotest_common.sh@958 -- # kill -0 103372 00:06:10.120 06:51:30 json_config -- common/autotest_common.sh@959 -- # uname 00:06:10.120 06:51:30 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.120 06:51:30 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103372 00:06:10.120 06:51:30 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.120 06:51:30 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.120 06:51:30 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103372' 00:06:10.120 killing process with pid 103372 00:06:10.120 06:51:30 json_config -- common/autotest_common.sh@973 -- # kill 103372 00:06:10.120 06:51:30 json_config -- common/autotest_common.sh@978 -- # wait 103372 00:06:12.023 06:51:32 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:12.023 06:51:32 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:12.023 06:51:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:12.023 06:51:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.023 06:51:32 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:12.023 06:51:32 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:12.023 INFO: Success 00:06:12.023 00:06:12.023 real 0m16.502s 00:06:12.023 user 0m18.417s 00:06:12.023 sys 0m2.295s 00:06:12.023 06:51:32 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.023 06:51:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.023 ************************************ 00:06:12.023 END TEST json_config 00:06:12.023 ************************************ 00:06:12.023 06:51:32 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:12.023 06:51:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.023 06:51:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.023 06:51:32 -- common/autotest_common.sh@10 -- # set +x 00:06:12.023 ************************************ 00:06:12.023 START TEST json_config_extra_key 00:06:12.023 ************************************ 00:06:12.023 06:51:32 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:12.023 06:51:32 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.023 06:51:32 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.023 06:51:32 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.023 06:51:32 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.023 06:51:32 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.023 06:51:32 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.023 06:51:32 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.024 06:51:32 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:12.024 06:51:32 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.024 06:51:32 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.024 --rc genhtml_branch_coverage=1 00:06:12.024 --rc genhtml_function_coverage=1 00:06:12.024 --rc genhtml_legend=1 00:06:12.024 --rc geninfo_all_blocks=1 00:06:12.024 --rc geninfo_unexecuted_blocks=1 00:06:12.024 00:06:12.024 ' 00:06:12.024 06:51:32 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.024 --rc genhtml_branch_coverage=1 00:06:12.024 --rc genhtml_function_coverage=1 00:06:12.024 --rc genhtml_legend=1 00:06:12.024 --rc geninfo_all_blocks=1 00:06:12.024 --rc geninfo_unexecuted_blocks=1 00:06:12.024 00:06:12.024 ' 00:06:12.024 06:51:32 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.024 --rc genhtml_branch_coverage=1 00:06:12.024 --rc genhtml_function_coverage=1 00:06:12.024 --rc genhtml_legend=1 00:06:12.024 --rc geninfo_all_blocks=1 00:06:12.024 --rc geninfo_unexecuted_blocks=1 00:06:12.024 00:06:12.024 ' 00:06:12.024 06:51:32 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.024 --rc genhtml_branch_coverage=1 00:06:12.024 --rc genhtml_function_coverage=1 00:06:12.024 --rc genhtml_legend=1 00:06:12.024 --rc geninfo_all_blocks=1 00:06:12.024 --rc geninfo_unexecuted_blocks=1 00:06:12.024 00:06:12.024 ' 00:06:12.024 06:51:32 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.024 06:51:32 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.024 06:51:32 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.024 06:51:32 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.024 06:51:32 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.024 06:51:32 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:12.024 06:51:32 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:12.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:12.024 06:51:32 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:12.024 06:51:32 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:12.024 06:51:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:12.024 06:51:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:12.024 06:51:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:12.024 06:51:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:12.024 06:51:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:12.024 06:51:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:12.024 06:51:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:12.024 06:51:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:12.024 06:51:32 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:12.024 06:51:32 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:12.024 INFO: launching applications... 
00:06:12.024 06:51:32 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:12.024 06:51:32 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:12.024 06:51:32 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:12.024 06:51:32 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:12.024 06:51:32 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:12.024 06:51:32 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:12.024 06:51:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:12.024 06:51:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:12.024 06:51:32 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=104302 00:06:12.024 06:51:32 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:12.024 06:51:32 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:12.025 Waiting for target to run... 00:06:12.025 06:51:32 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 104302 /var/tmp/spdk_tgt.sock 00:06:12.025 06:51:32 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 104302 ']' 00:06:12.025 06:51:32 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:12.025 06:51:32 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.025 06:51:32 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:12.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:12.025 06:51:32 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.025 06:51:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:12.025 [2024-11-18 06:51:32.815823] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:12.025 [2024-11-18 06:51:32.815919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104302 ] 00:06:12.283 [2024-11-18 06:51:33.158712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.283 [2024-11-18 06:51:33.189104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.850 06:51:33 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.851 06:51:33 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:12.851 06:51:33 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:12.851 00:06:12.851 06:51:33 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:12.851 INFO: shutting down applications... 
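The extra_key variant reuses the same start/stop helpers from json_config/common.sh; the only difference visible in the trace is the configuration file handed to spdk_tgt. Sketch of the launch recorded above (backgrounding shown schematically):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
      -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json &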
00:06:12.851 06:51:33 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:12.851 06:51:33 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:12.851 06:51:33 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:12.851 06:51:33 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 104302 ]] 00:06:12.851 06:51:33 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 104302 00:06:12.851 06:51:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:12.851 06:51:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.851 06:51:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 104302 00:06:12.851 06:51:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:13.417 06:51:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:13.417 06:51:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:13.417 06:51:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 104302 00:06:13.417 06:51:34 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:13.417 06:51:34 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:13.417 06:51:34 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:13.417 06:51:34 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:13.417 SPDK target shutdown done 00:06:13.417 06:51:34 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:13.417 Success 00:06:13.417 00:06:13.417 real 0m1.676s 00:06:13.417 user 0m1.602s 00:06:13.417 sys 0m0.458s 00:06:13.417 06:51:34 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.417 06:51:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:13.417 ************************************ 00:06:13.417 END TEST json_config_extra_key 00:06:13.417 ************************************ 00:06:13.417 06:51:34 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:13.417 06:51:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.417 06:51:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.417 06:51:34 -- common/autotest_common.sh@10 -- # set +x 00:06:13.417 ************************************ 00:06:13.417 START TEST alias_rpc 00:06:13.417 ************************************ 00:06:13.417 06:51:34 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:13.417 * Looking for test storage... 
00:06:13.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:13.677 06:51:34 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:13.677 06:51:34 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:13.677 06:51:34 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:13.677 06:51:34 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.677 06:51:34 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:13.677 06:51:34 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.677 06:51:34 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:13.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.677 --rc genhtml_branch_coverage=1 00:06:13.677 --rc genhtml_function_coverage=1 00:06:13.677 --rc genhtml_legend=1 00:06:13.677 --rc geninfo_all_blocks=1 00:06:13.677 --rc geninfo_unexecuted_blocks=1 00:06:13.677 00:06:13.677 ' 00:06:13.677 06:51:34 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:13.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.677 --rc genhtml_branch_coverage=1 00:06:13.677 --rc genhtml_function_coverage=1 00:06:13.677 --rc genhtml_legend=1 00:06:13.677 --rc geninfo_all_blocks=1 00:06:13.677 --rc geninfo_unexecuted_blocks=1 00:06:13.677 00:06:13.677 ' 00:06:13.677 06:51:34 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:13.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.677 --rc genhtml_branch_coverage=1 00:06:13.677 --rc genhtml_function_coverage=1 00:06:13.677 --rc genhtml_legend=1 00:06:13.677 --rc geninfo_all_blocks=1 00:06:13.677 --rc geninfo_unexecuted_blocks=1 00:06:13.677 00:06:13.677 ' 00:06:13.677 06:51:34 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:13.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.677 --rc genhtml_branch_coverage=1 00:06:13.677 --rc genhtml_function_coverage=1 00:06:13.677 --rc genhtml_legend=1 00:06:13.677 --rc geninfo_all_blocks=1 00:06:13.677 --rc geninfo_unexecuted_blocks=1 00:06:13.677 00:06:13.677 ' 00:06:13.677 06:51:34 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:13.677 06:51:34 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=104500 00:06:13.677 06:51:34 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:13.677 06:51:34 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 104500 00:06:13.677 06:51:34 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 104500 ']' 00:06:13.677 06:51:34 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.677 06:51:34 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.677 06:51:34 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.677 06:51:34 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.677 06:51:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.678 [2024-11-18 06:51:34.543757] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:13.678 [2024-11-18 06:51:34.543878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104500 ] 00:06:13.678 [2024-11-18 06:51:34.616461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.936 [2024-11-18 06:51:34.665251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.936 06:51:34 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.936 06:51:34 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.936 06:51:34 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:14.503 06:51:35 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 104500 00:06:14.503 06:51:35 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 104500 ']' 00:06:14.503 06:51:35 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 104500 00:06:14.503 06:51:35 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:14.503 06:51:35 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.503 06:51:35 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104500 00:06:14.503 06:51:35 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.503 06:51:35 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.503 06:51:35 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104500' 00:06:14.503 killing process with pid 104500 00:06:14.503 06:51:35 alias_rpc -- common/autotest_common.sh@973 -- # kill 104500 00:06:14.503 06:51:35 alias_rpc -- common/autotest_common.sh@978 -- # wait 104500 00:06:14.761 00:06:14.761 real 0m1.267s 00:06:14.761 user 0m1.394s 00:06:14.761 sys 0m0.435s 00:06:14.761 06:51:35 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.761 06:51:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.761 ************************************ 00:06:14.761 END TEST alias_rpc 00:06:14.761 ************************************ 00:06:14.761 06:51:35 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:14.761 06:51:35 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:14.761 06:51:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.761 06:51:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.761 06:51:35 -- common/autotest_common.sh@10 -- # set +x 00:06:14.761 ************************************ 00:06:14.761 START TEST spdkcli_tcp 00:06:14.761 ************************************ 00:06:14.761 06:51:35 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:14.761 * Looking for test storage... 
00:06:14.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:14.761 06:51:35 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:14.761 06:51:35 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:14.761 06:51:35 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:15.020 06:51:35 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:15.020 06:51:35 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.020 06:51:35 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.020 06:51:35 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.020 06:51:35 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.020 06:51:35 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.020 06:51:35 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.020 06:51:35 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.020 06:51:35 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.021 06:51:35 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.021 06:51:35 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.021 06:51:35 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.021 06:51:35 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:15.021 06:51:35 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:15.021 06:51:35 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.021 06:51:35 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.021 06:51:35 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:15.021 06:51:35 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:15.021 06:51:35 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.021 06:51:35 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:15.021 06:51:35 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.021 06:51:35 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:15.021 06:51:35 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:15.021 06:51:35 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.021 06:51:35 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:15.021 06:51:35 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.021 06:51:35 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.021 06:51:35 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.021 06:51:35 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:15.021 06:51:35 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.021 06:51:35 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:15.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.021 --rc genhtml_branch_coverage=1 00:06:15.021 --rc genhtml_function_coverage=1 00:06:15.021 --rc genhtml_legend=1 00:06:15.021 --rc geninfo_all_blocks=1 00:06:15.021 --rc geninfo_unexecuted_blocks=1 00:06:15.021 00:06:15.021 ' 00:06:15.021 06:51:35 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:15.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.021 --rc genhtml_branch_coverage=1 00:06:15.021 --rc genhtml_function_coverage=1 00:06:15.021 --rc genhtml_legend=1 00:06:15.021 --rc geninfo_all_blocks=1 00:06:15.021 --rc 
geninfo_unexecuted_blocks=1 00:06:15.021 00:06:15.021 ' 00:06:15.021 06:51:35 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:15.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.021 --rc genhtml_branch_coverage=1 00:06:15.021 --rc genhtml_function_coverage=1 00:06:15.021 --rc genhtml_legend=1 00:06:15.021 --rc geninfo_all_blocks=1 00:06:15.021 --rc geninfo_unexecuted_blocks=1 00:06:15.021 00:06:15.021 ' 00:06:15.021 06:51:35 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:15.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.021 --rc genhtml_branch_coverage=1 00:06:15.021 --rc genhtml_function_coverage=1 00:06:15.021 --rc genhtml_legend=1 00:06:15.021 --rc geninfo_all_blocks=1 00:06:15.021 --rc geninfo_unexecuted_blocks=1 00:06:15.021 00:06:15.021 ' 00:06:15.021 06:51:35 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:15.021 06:51:35 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:15.021 06:51:35 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:15.021 06:51:35 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:15.021 06:51:35 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:15.021 06:51:35 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:15.021 06:51:35 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:15.021 06:51:35 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:15.021 06:51:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.021 06:51:35 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=104814 00:06:15.021 06:51:35 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:15.021 06:51:35 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 104814 00:06:15.021 06:51:35 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 104814 ']' 00:06:15.021 06:51:35 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.021 06:51:35 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.021 06:51:35 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.021 06:51:35 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.021 06:51:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.021 [2024-11-18 06:51:35.871616] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:15.021 [2024-11-18 06:51:35.871700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104814 ] 00:06:15.021 [2024-11-18 06:51:35.936236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.021 [2024-11-18 06:51:35.983246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.021 [2024-11-18 06:51:35.983249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.280 06:51:36 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.280 06:51:36 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:15.280 06:51:36 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=104823 00:06:15.280 06:51:36 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:15.280 06:51:36 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:15.538 [ 00:06:15.538 "bdev_malloc_delete", 00:06:15.538 "bdev_malloc_create", 00:06:15.538 "bdev_null_resize", 00:06:15.538 "bdev_null_delete", 00:06:15.538 "bdev_null_create", 00:06:15.538 "bdev_nvme_cuse_unregister", 00:06:15.538 "bdev_nvme_cuse_register", 00:06:15.538 "bdev_opal_new_user", 00:06:15.538 "bdev_opal_set_lock_state", 00:06:15.538 "bdev_opal_delete", 00:06:15.538 "bdev_opal_get_info", 00:06:15.538 "bdev_opal_create", 00:06:15.538 "bdev_nvme_opal_revert", 00:06:15.538 "bdev_nvme_opal_init", 00:06:15.538 "bdev_nvme_send_cmd", 00:06:15.538 "bdev_nvme_set_keys", 00:06:15.538 "bdev_nvme_get_path_iostat", 00:06:15.538 "bdev_nvme_get_mdns_discovery_info", 00:06:15.538 "bdev_nvme_stop_mdns_discovery", 00:06:15.538 "bdev_nvme_start_mdns_discovery", 00:06:15.538 "bdev_nvme_set_multipath_policy", 00:06:15.538 "bdev_nvme_set_preferred_path", 00:06:15.538 "bdev_nvme_get_io_paths", 00:06:15.538 "bdev_nvme_remove_error_injection", 00:06:15.538 "bdev_nvme_add_error_injection", 00:06:15.538 "bdev_nvme_get_discovery_info", 00:06:15.538 "bdev_nvme_stop_discovery", 00:06:15.538 "bdev_nvme_start_discovery", 00:06:15.538 "bdev_nvme_get_controller_health_info", 00:06:15.538 "bdev_nvme_disable_controller", 00:06:15.538 "bdev_nvme_enable_controller", 00:06:15.538 "bdev_nvme_reset_controller", 00:06:15.538 "bdev_nvme_get_transport_statistics", 00:06:15.538 "bdev_nvme_apply_firmware", 00:06:15.538 "bdev_nvme_detach_controller", 00:06:15.538 "bdev_nvme_get_controllers", 00:06:15.538 "bdev_nvme_attach_controller", 00:06:15.538 "bdev_nvme_set_hotplug", 00:06:15.538 "bdev_nvme_set_options", 00:06:15.538 "bdev_passthru_delete", 00:06:15.538 "bdev_passthru_create", 00:06:15.538 "bdev_lvol_set_parent_bdev", 00:06:15.538 "bdev_lvol_set_parent", 00:06:15.538 "bdev_lvol_check_shallow_copy", 00:06:15.538 "bdev_lvol_start_shallow_copy", 00:06:15.538 "bdev_lvol_grow_lvstore", 00:06:15.538 "bdev_lvol_get_lvols", 00:06:15.538 "bdev_lvol_get_lvstores", 00:06:15.538 "bdev_lvol_delete", 00:06:15.538 "bdev_lvol_set_read_only", 00:06:15.538 "bdev_lvol_resize", 00:06:15.538 "bdev_lvol_decouple_parent", 00:06:15.538 "bdev_lvol_inflate", 00:06:15.538 "bdev_lvol_rename", 00:06:15.538 "bdev_lvol_clone_bdev", 00:06:15.538 "bdev_lvol_clone", 00:06:15.538 "bdev_lvol_snapshot", 00:06:15.538 "bdev_lvol_create", 00:06:15.538 "bdev_lvol_delete_lvstore", 00:06:15.538 "bdev_lvol_rename_lvstore", 
00:06:15.538 "bdev_lvol_create_lvstore", 00:06:15.538 "bdev_raid_set_options", 00:06:15.538 "bdev_raid_remove_base_bdev", 00:06:15.538 "bdev_raid_add_base_bdev", 00:06:15.538 "bdev_raid_delete", 00:06:15.538 "bdev_raid_create", 00:06:15.538 "bdev_raid_get_bdevs", 00:06:15.538 "bdev_error_inject_error", 00:06:15.538 "bdev_error_delete", 00:06:15.538 "bdev_error_create", 00:06:15.538 "bdev_split_delete", 00:06:15.538 "bdev_split_create", 00:06:15.538 "bdev_delay_delete", 00:06:15.538 "bdev_delay_create", 00:06:15.538 "bdev_delay_update_latency", 00:06:15.538 "bdev_zone_block_delete", 00:06:15.538 "bdev_zone_block_create", 00:06:15.538 "blobfs_create", 00:06:15.538 "blobfs_detect", 00:06:15.538 "blobfs_set_cache_size", 00:06:15.538 "bdev_aio_delete", 00:06:15.538 "bdev_aio_rescan", 00:06:15.538 "bdev_aio_create", 00:06:15.538 "bdev_ftl_set_property", 00:06:15.538 "bdev_ftl_get_properties", 00:06:15.538 "bdev_ftl_get_stats", 00:06:15.538 "bdev_ftl_unmap", 00:06:15.538 "bdev_ftl_unload", 00:06:15.538 "bdev_ftl_delete", 00:06:15.538 "bdev_ftl_load", 00:06:15.538 "bdev_ftl_create", 00:06:15.538 "bdev_virtio_attach_controller", 00:06:15.538 "bdev_virtio_scsi_get_devices", 00:06:15.538 "bdev_virtio_detach_controller", 00:06:15.538 "bdev_virtio_blk_set_hotplug", 00:06:15.538 "bdev_iscsi_delete", 00:06:15.538 "bdev_iscsi_create", 00:06:15.538 "bdev_iscsi_set_options", 00:06:15.538 "accel_error_inject_error", 00:06:15.538 "ioat_scan_accel_module", 00:06:15.538 "dsa_scan_accel_module", 00:06:15.538 "iaa_scan_accel_module", 00:06:15.538 "vfu_virtio_create_fs_endpoint", 00:06:15.538 "vfu_virtio_create_scsi_endpoint", 00:06:15.538 "vfu_virtio_scsi_remove_target", 00:06:15.538 "vfu_virtio_scsi_add_target", 00:06:15.538 "vfu_virtio_create_blk_endpoint", 00:06:15.538 "vfu_virtio_delete_endpoint", 00:06:15.538 "keyring_file_remove_key", 00:06:15.538 "keyring_file_add_key", 00:06:15.538 "keyring_linux_set_options", 00:06:15.538 "fsdev_aio_delete", 00:06:15.538 "fsdev_aio_create", 00:06:15.538 "iscsi_get_histogram", 00:06:15.538 "iscsi_enable_histogram", 00:06:15.538 "iscsi_set_options", 00:06:15.538 "iscsi_get_auth_groups", 00:06:15.538 "iscsi_auth_group_remove_secret", 00:06:15.538 "iscsi_auth_group_add_secret", 00:06:15.538 "iscsi_delete_auth_group", 00:06:15.538 "iscsi_create_auth_group", 00:06:15.538 "iscsi_set_discovery_auth", 00:06:15.538 "iscsi_get_options", 00:06:15.538 "iscsi_target_node_request_logout", 00:06:15.538 "iscsi_target_node_set_redirect", 00:06:15.538 "iscsi_target_node_set_auth", 00:06:15.538 "iscsi_target_node_add_lun", 00:06:15.538 "iscsi_get_stats", 00:06:15.538 "iscsi_get_connections", 00:06:15.538 "iscsi_portal_group_set_auth", 00:06:15.538 "iscsi_start_portal_group", 00:06:15.538 "iscsi_delete_portal_group", 00:06:15.538 "iscsi_create_portal_group", 00:06:15.538 "iscsi_get_portal_groups", 00:06:15.538 "iscsi_delete_target_node", 00:06:15.538 "iscsi_target_node_remove_pg_ig_maps", 00:06:15.538 "iscsi_target_node_add_pg_ig_maps", 00:06:15.538 "iscsi_create_target_node", 00:06:15.538 "iscsi_get_target_nodes", 00:06:15.538 "iscsi_delete_initiator_group", 00:06:15.538 "iscsi_initiator_group_remove_initiators", 00:06:15.538 "iscsi_initiator_group_add_initiators", 00:06:15.538 "iscsi_create_initiator_group", 00:06:15.538 "iscsi_get_initiator_groups", 00:06:15.538 "nvmf_set_crdt", 00:06:15.538 "nvmf_set_config", 00:06:15.538 "nvmf_set_max_subsystems", 00:06:15.538 "nvmf_stop_mdns_prr", 00:06:15.538 "nvmf_publish_mdns_prr", 00:06:15.538 "nvmf_subsystem_get_listeners", 00:06:15.538 
"nvmf_subsystem_get_qpairs", 00:06:15.538 "nvmf_subsystem_get_controllers", 00:06:15.538 "nvmf_get_stats", 00:06:15.538 "nvmf_get_transports", 00:06:15.538 "nvmf_create_transport", 00:06:15.538 "nvmf_get_targets", 00:06:15.538 "nvmf_delete_target", 00:06:15.538 "nvmf_create_target", 00:06:15.538 "nvmf_subsystem_allow_any_host", 00:06:15.538 "nvmf_subsystem_set_keys", 00:06:15.538 "nvmf_subsystem_remove_host", 00:06:15.538 "nvmf_subsystem_add_host", 00:06:15.538 "nvmf_ns_remove_host", 00:06:15.538 "nvmf_ns_add_host", 00:06:15.538 "nvmf_subsystem_remove_ns", 00:06:15.538 "nvmf_subsystem_set_ns_ana_group", 00:06:15.538 "nvmf_subsystem_add_ns", 00:06:15.538 "nvmf_subsystem_listener_set_ana_state", 00:06:15.538 "nvmf_discovery_get_referrals", 00:06:15.538 "nvmf_discovery_remove_referral", 00:06:15.538 "nvmf_discovery_add_referral", 00:06:15.538 "nvmf_subsystem_remove_listener", 00:06:15.538 "nvmf_subsystem_add_listener", 00:06:15.538 "nvmf_delete_subsystem", 00:06:15.538 "nvmf_create_subsystem", 00:06:15.538 "nvmf_get_subsystems", 00:06:15.538 "env_dpdk_get_mem_stats", 00:06:15.538 "nbd_get_disks", 00:06:15.538 "nbd_stop_disk", 00:06:15.538 "nbd_start_disk", 00:06:15.538 "ublk_recover_disk", 00:06:15.538 "ublk_get_disks", 00:06:15.538 "ublk_stop_disk", 00:06:15.538 "ublk_start_disk", 00:06:15.538 "ublk_destroy_target", 00:06:15.538 "ublk_create_target", 00:06:15.538 "virtio_blk_create_transport", 00:06:15.538 "virtio_blk_get_transports", 00:06:15.538 "vhost_controller_set_coalescing", 00:06:15.538 "vhost_get_controllers", 00:06:15.539 "vhost_delete_controller", 00:06:15.539 "vhost_create_blk_controller", 00:06:15.539 "vhost_scsi_controller_remove_target", 00:06:15.539 "vhost_scsi_controller_add_target", 00:06:15.539 "vhost_start_scsi_controller", 00:06:15.539 "vhost_create_scsi_controller", 00:06:15.539 "thread_set_cpumask", 00:06:15.539 "scheduler_set_options", 00:06:15.539 "framework_get_governor", 00:06:15.539 "framework_get_scheduler", 00:06:15.539 "framework_set_scheduler", 00:06:15.539 "framework_get_reactors", 00:06:15.539 "thread_get_io_channels", 00:06:15.539 "thread_get_pollers", 00:06:15.539 "thread_get_stats", 00:06:15.539 "framework_monitor_context_switch", 00:06:15.539 "spdk_kill_instance", 00:06:15.539 "log_enable_timestamps", 00:06:15.539 "log_get_flags", 00:06:15.539 "log_clear_flag", 00:06:15.539 "log_set_flag", 00:06:15.539 "log_get_level", 00:06:15.539 "log_set_level", 00:06:15.539 "log_get_print_level", 00:06:15.539 "log_set_print_level", 00:06:15.539 "framework_enable_cpumask_locks", 00:06:15.539 "framework_disable_cpumask_locks", 00:06:15.539 "framework_wait_init", 00:06:15.539 "framework_start_init", 00:06:15.539 "scsi_get_devices", 00:06:15.539 "bdev_get_histogram", 00:06:15.539 "bdev_enable_histogram", 00:06:15.539 "bdev_set_qos_limit", 00:06:15.539 "bdev_set_qd_sampling_period", 00:06:15.539 "bdev_get_bdevs", 00:06:15.539 "bdev_reset_iostat", 00:06:15.539 "bdev_get_iostat", 00:06:15.539 "bdev_examine", 00:06:15.539 "bdev_wait_for_examine", 00:06:15.539 "bdev_set_options", 00:06:15.539 "accel_get_stats", 00:06:15.539 "accel_set_options", 00:06:15.539 "accel_set_driver", 00:06:15.539 "accel_crypto_key_destroy", 00:06:15.539 "accel_crypto_keys_get", 00:06:15.539 "accel_crypto_key_create", 00:06:15.539 "accel_assign_opc", 00:06:15.539 "accel_get_module_info", 00:06:15.539 "accel_get_opc_assignments", 00:06:15.539 "vmd_rescan", 00:06:15.539 "vmd_remove_device", 00:06:15.539 "vmd_enable", 00:06:15.539 "sock_get_default_impl", 00:06:15.539 "sock_set_default_impl", 
00:06:15.539 "sock_impl_set_options", 00:06:15.539 "sock_impl_get_options", 00:06:15.539 "iobuf_get_stats", 00:06:15.539 "iobuf_set_options", 00:06:15.539 "keyring_get_keys", 00:06:15.539 "vfu_tgt_set_base_path", 00:06:15.539 "framework_get_pci_devices", 00:06:15.539 "framework_get_config", 00:06:15.539 "framework_get_subsystems", 00:06:15.539 "fsdev_set_opts", 00:06:15.539 "fsdev_get_opts", 00:06:15.539 "trace_get_info", 00:06:15.539 "trace_get_tpoint_group_mask", 00:06:15.539 "trace_disable_tpoint_group", 00:06:15.539 "trace_enable_tpoint_group", 00:06:15.539 "trace_clear_tpoint_mask", 00:06:15.539 "trace_set_tpoint_mask", 00:06:15.539 "notify_get_notifications", 00:06:15.539 "notify_get_types", 00:06:15.539 "spdk_get_version", 00:06:15.539 "rpc_get_methods" 00:06:15.539 ] 00:06:15.539 06:51:36 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:15.539 06:51:36 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:15.539 06:51:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.797 06:51:36 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:15.797 06:51:36 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 104814 00:06:15.797 06:51:36 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 104814 ']' 00:06:15.797 06:51:36 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 104814 00:06:15.797 06:51:36 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:15.797 06:51:36 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.797 06:51:36 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104814 00:06:15.797 06:51:36 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.797 06:51:36 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.797 06:51:36 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104814' 00:06:15.797 killing process with pid 104814 00:06:15.797 06:51:36 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 104814 00:06:15.797 06:51:36 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 104814 00:06:16.058 00:06:16.058 real 0m1.290s 00:06:16.058 user 0m2.308s 00:06:16.058 sys 0m0.480s 00:06:16.058 06:51:36 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.058 06:51:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.058 ************************************ 00:06:16.058 END TEST spdkcli_tcp 00:06:16.058 ************************************ 00:06:16.058 06:51:36 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:16.058 06:51:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.058 06:51:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.058 06:51:36 -- common/autotest_common.sh@10 -- # set +x 00:06:16.058 ************************************ 00:06:16.058 START TEST dpdk_mem_utility 00:06:16.058 ************************************ 00:06:16.058 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:16.318 * Looking for test storage... 
00:06:16.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:16.318 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.318 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.318 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.318 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.318 06:51:37 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.318 06:51:37 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.318 06:51:37 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.318 06:51:37 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.318 06:51:37 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.318 06:51:37 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.318 06:51:37 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.319 06:51:37 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.319 06:51:37 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.319 06:51:37 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.319 06:51:37 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.319 06:51:37 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:16.319 06:51:37 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:16.319 06:51:37 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.319 06:51:37 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.319 06:51:37 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:16.319 06:51:37 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:16.319 06:51:37 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.319 06:51:37 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:16.319 06:51:37 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.319 06:51:37 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:16.319 06:51:37 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:16.319 06:51:37 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.319 06:51:37 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:16.319 06:51:37 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.319 06:51:37 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.319 06:51:37 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.319 06:51:37 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:16.319 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.319 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.319 --rc genhtml_branch_coverage=1 00:06:16.319 --rc genhtml_function_coverage=1 00:06:16.319 --rc genhtml_legend=1 00:06:16.319 --rc geninfo_all_blocks=1 00:06:16.319 --rc geninfo_unexecuted_blocks=1 00:06:16.319 00:06:16.319 ' 00:06:16.319 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.319 --rc 
genhtml_branch_coverage=1 00:06:16.319 --rc genhtml_function_coverage=1 00:06:16.319 --rc genhtml_legend=1 00:06:16.319 --rc geninfo_all_blocks=1 00:06:16.319 --rc geninfo_unexecuted_blocks=1 00:06:16.319 00:06:16.319 ' 00:06:16.319 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:16.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.319 --rc genhtml_branch_coverage=1 00:06:16.319 --rc genhtml_function_coverage=1 00:06:16.319 --rc genhtml_legend=1 00:06:16.319 --rc geninfo_all_blocks=1 00:06:16.319 --rc geninfo_unexecuted_blocks=1 00:06:16.319 00:06:16.319 ' 00:06:16.319 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:16.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.319 --rc genhtml_branch_coverage=1 00:06:16.319 --rc genhtml_function_coverage=1 00:06:16.319 --rc genhtml_legend=1 00:06:16.319 --rc geninfo_all_blocks=1 00:06:16.319 --rc geninfo_unexecuted_blocks=1 00:06:16.319 00:06:16.319 ' 00:06:16.319 06:51:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:16.319 06:51:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=105028 00:06:16.319 06:51:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:16.319 06:51:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 105028 00:06:16.319 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 105028 ']' 00:06:16.319 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.319 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.319 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.319 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.319 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:16.319 [2024-11-18 06:51:37.203872] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:16.319 [2024-11-18 06:51:37.203966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105028 ] 00:06:16.319 [2024-11-18 06:51:37.268570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.578 [2024-11-18 06:51:37.314017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.838 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.838 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:16.838 06:51:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:16.838 06:51:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:16.838 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.838 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:16.838 { 00:06:16.838 "filename": "/tmp/spdk_mem_dump.txt" 00:06:16.838 } 00:06:16.838 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.838 06:51:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:16.838 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:16.838 1 heaps totaling size 810.000000 MiB 00:06:16.838 size: 810.000000 MiB heap id: 0 00:06:16.838 end heaps---------- 00:06:16.838 9 mempools totaling size 595.772034 MiB 00:06:16.838 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:16.838 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:16.838 size: 92.545471 MiB name: bdev_io_105028 00:06:16.838 size: 50.003479 MiB name: msgpool_105028 00:06:16.838 size: 36.509338 MiB name: fsdev_io_105028 00:06:16.838 size: 21.763794 MiB name: PDU_Pool 00:06:16.838 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:16.838 size: 4.133484 MiB name: evtpool_105028 00:06:16.838 size: 0.026123 MiB name: Session_Pool 00:06:16.838 end mempools------- 00:06:16.838 6 memzones totaling size 4.142822 MiB 00:06:16.838 size: 1.000366 MiB name: RG_ring_0_105028 00:06:16.838 size: 1.000366 MiB name: RG_ring_1_105028 00:06:16.838 size: 1.000366 MiB name: RG_ring_4_105028 00:06:16.838 size: 1.000366 MiB name: RG_ring_5_105028 00:06:16.838 size: 0.125366 MiB name: RG_ring_2_105028 00:06:16.838 size: 0.015991 MiB name: RG_ring_3_105028 00:06:16.838 end memzones------- 00:06:16.838 06:51:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:16.838 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:16.838 list of free elements. 
size: 10.862488 MiB 00:06:16.838 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:16.838 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:16.838 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:16.838 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:16.838 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:16.838 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:16.838 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:16.838 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:16.838 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:16.838 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:16.838 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:16.838 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:16.838 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:16.838 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:16.838 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:16.838 list of standard malloc elements. size: 199.218628 MiB 00:06:16.838 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:16.838 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:16.838 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:16.838 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:16.838 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:16.838 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:16.838 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:16.838 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:16.838 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:16.838 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:16.838 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:16.838 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:16.838 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:16.838 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:16.838 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:16.838 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:16.838 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:16.838 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:16.838 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:16.838 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:16.838 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:16.838 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:16.838 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:16.838 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:16.838 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:16.838 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:16.838 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:16.838 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:16.838 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:16.838 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:16.838 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:16.838 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:16.838 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:06:16.838 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:16.838 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:16.838 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:16.838 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:16.838 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:16.838 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:16.838 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:16.838 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:16.838 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:16.838 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:16.838 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:16.838 list of memzone associated elements. size: 599.918884 MiB 00:06:16.838 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:16.838 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:16.838 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:16.838 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:16.838 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:16.838 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_105028_0 00:06:16.838 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:16.838 associated memzone info: size: 48.002930 MiB name: MP_msgpool_105028_0 00:06:16.838 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:16.838 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_105028_0 00:06:16.838 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:16.838 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:16.838 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:16.838 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:16.838 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:16.838 associated memzone info: size: 3.000122 MiB name: MP_evtpool_105028_0 00:06:16.838 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:16.838 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_105028 00:06:16.838 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:16.838 associated memzone info: size: 1.007996 MiB name: MP_evtpool_105028 00:06:16.838 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:16.838 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:16.838 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:16.838 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:16.839 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:16.839 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:16.839 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:16.839 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:16.839 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:16.839 associated memzone info: size: 1.000366 MiB name: RG_ring_0_105028 00:06:16.839 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:16.839 associated memzone info: size: 1.000366 MiB name: RG_ring_1_105028 00:06:16.839 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:16.839 associated memzone info: size: 1.000366 MiB name: RG_ring_4_105028 00:06:16.839 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:06:16.839 associated memzone info: size: 1.000366 MiB name: RG_ring_5_105028 00:06:16.839 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:16.839 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_105028 00:06:16.839 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:16.839 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_105028 00:06:16.839 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:16.839 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:16.839 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:16.839 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:16.839 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:16.839 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:16.839 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:16.839 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_105028 00:06:16.839 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:16.839 associated memzone info: size: 0.125366 MiB name: RG_ring_2_105028 00:06:16.839 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:16.839 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:16.839 element at address: 0x200027a69100 with size: 0.023743 MiB 00:06:16.839 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:16.839 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:16.839 associated memzone info: size: 0.015991 MiB name: RG_ring_3_105028 00:06:16.839 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:06:16.839 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:16.839 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:16.839 associated memzone info: size: 0.000183 MiB name: MP_msgpool_105028 00:06:16.839 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:16.839 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_105028 00:06:16.839 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:16.839 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_105028 00:06:16.839 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:06:16.839 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:16.839 06:51:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:16.839 06:51:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 105028 00:06:16.839 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 105028 ']' 00:06:16.839 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 105028 00:06:16.839 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:16.839 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.839 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105028 00:06:16.839 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.839 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.839 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105028' 00:06:16.839 killing process with pid 105028 00:06:16.839 06:51:37 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 105028 00:06:16.839 06:51:37 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 105028 00:06:17.407 00:06:17.407 real 0m1.080s 00:06:17.407 user 0m1.042s 00:06:17.407 sys 0m0.423s 00:06:17.407 06:51:38 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.407 06:51:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:17.407 ************************************ 00:06:17.407 END TEST dpdk_mem_utility 00:06:17.407 ************************************ 00:06:17.407 06:51:38 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:17.407 06:51:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.407 06:51:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.407 06:51:38 -- common/autotest_common.sh@10 -- # set +x 00:06:17.407 ************************************ 00:06:17.407 START TEST event 00:06:17.407 ************************************ 00:06:17.407 06:51:38 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:17.407 * Looking for test storage... 00:06:17.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:17.407 06:51:38 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:17.407 06:51:38 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:17.407 06:51:38 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:17.407 06:51:38 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:17.407 06:51:38 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.407 06:51:38 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.407 06:51:38 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.407 06:51:38 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.407 06:51:38 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.407 06:51:38 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.407 06:51:38 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.407 06:51:38 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.407 06:51:38 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.407 06:51:38 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.407 06:51:38 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.407 06:51:38 event -- scripts/common.sh@344 -- # case "$op" in 00:06:17.407 06:51:38 event -- scripts/common.sh@345 -- # : 1 00:06:17.407 06:51:38 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.407 06:51:38 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:17.407 06:51:38 event -- scripts/common.sh@365 -- # decimal 1 00:06:17.407 06:51:38 event -- scripts/common.sh@353 -- # local d=1 00:06:17.407 06:51:38 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.407 06:51:38 event -- scripts/common.sh@355 -- # echo 1 00:06:17.407 06:51:38 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.407 06:51:38 event -- scripts/common.sh@366 -- # decimal 2 00:06:17.407 06:51:38 event -- scripts/common.sh@353 -- # local d=2 00:06:17.407 06:51:38 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.407 06:51:38 event -- scripts/common.sh@355 -- # echo 2 00:06:17.407 06:51:38 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.407 06:51:38 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.407 06:51:38 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.407 06:51:38 event -- scripts/common.sh@368 -- # return 0 00:06:17.407 06:51:38 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.407 06:51:38 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:17.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.407 --rc genhtml_branch_coverage=1 00:06:17.407 --rc genhtml_function_coverage=1 00:06:17.407 --rc genhtml_legend=1 00:06:17.407 --rc geninfo_all_blocks=1 00:06:17.407 --rc geninfo_unexecuted_blocks=1 00:06:17.407 00:06:17.407 ' 00:06:17.407 06:51:38 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:17.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.407 --rc genhtml_branch_coverage=1 00:06:17.407 --rc genhtml_function_coverage=1 00:06:17.407 --rc genhtml_legend=1 00:06:17.407 --rc geninfo_all_blocks=1 00:06:17.407 --rc geninfo_unexecuted_blocks=1 00:06:17.407 00:06:17.407 ' 00:06:17.407 06:51:38 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:17.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.407 --rc genhtml_branch_coverage=1 00:06:17.407 --rc genhtml_function_coverage=1 00:06:17.407 --rc genhtml_legend=1 00:06:17.407 --rc geninfo_all_blocks=1 00:06:17.407 --rc geninfo_unexecuted_blocks=1 00:06:17.407 00:06:17.407 ' 00:06:17.407 06:51:38 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:17.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.407 --rc genhtml_branch_coverage=1 00:06:17.407 --rc genhtml_function_coverage=1 00:06:17.407 --rc genhtml_legend=1 00:06:17.407 --rc geninfo_all_blocks=1 00:06:17.407 --rc geninfo_unexecuted_blocks=1 00:06:17.407 00:06:17.407 ' 00:06:17.407 06:51:38 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:17.407 06:51:38 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:17.407 06:51:38 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:17.407 06:51:38 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:17.407 06:51:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.407 06:51:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.407 ************************************ 00:06:17.407 START TEST event_perf 00:06:17.407 ************************************ 00:06:17.407 06:51:38 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:06:17.407 Running I/O for 1 seconds...[2024-11-18 06:51:38.324419] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:17.407 [2024-11-18 06:51:38.324584] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105225 ] 00:06:17.666 [2024-11-18 06:51:38.394812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.666 [2024-11-18 06:51:38.447146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.666 [2024-11-18 06:51:38.447253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.666 [2024-11-18 06:51:38.447351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.666 [2024-11-18 06:51:38.447359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.602 Running I/O for 1 seconds... 00:06:18.602 lcore 0: 236503 00:06:18.602 lcore 1: 236502 00:06:18.602 lcore 2: 236503 00:06:18.602 lcore 3: 236502 00:06:18.602 done. 00:06:18.602 00:06:18.602 real 0m1.184s 00:06:18.602 user 0m4.102s 00:06:18.602 sys 0m0.076s 00:06:18.602 06:51:39 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.602 06:51:39 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.602 ************************************ 00:06:18.602 END TEST event_perf 00:06:18.602 ************************************ 00:06:18.602 06:51:39 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:18.602 06:51:39 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:18.602 06:51:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.602 06:51:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.602 ************************************ 00:06:18.602 START TEST event_reactor 00:06:18.602 ************************************ 00:06:18.602 06:51:39 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:18.602 [2024-11-18 06:51:39.560895] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:18.602 [2024-11-18 06:51:39.560964] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105380 ] 00:06:18.862 [2024-11-18 06:51:39.628656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.862 [2024-11-18 06:51:39.672365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.799 test_start 00:06:19.799 oneshot 00:06:19.799 tick 100 00:06:19.799 tick 100 00:06:19.799 tick 250 00:06:19.799 tick 100 00:06:19.799 tick 100 00:06:19.799 tick 250 00:06:19.799 tick 100 00:06:19.799 tick 500 00:06:19.799 tick 100 00:06:19.799 tick 100 00:06:19.799 tick 250 00:06:19.799 tick 100 00:06:19.799 tick 100 00:06:19.799 test_end 00:06:19.799 00:06:19.799 real 0m1.169s 00:06:19.799 user 0m1.094s 00:06:19.799 sys 0m0.070s 00:06:19.799 06:51:40 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.799 06:51:40 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:19.799 ************************************ 00:06:19.799 END TEST event_reactor 00:06:19.799 ************************************ 00:06:19.799 06:51:40 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:19.799 06:51:40 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:19.799 06:51:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.799 06:51:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.799 ************************************ 00:06:19.799 START TEST event_reactor_perf 00:06:19.799 ************************************ 00:06:19.799 06:51:40 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:20.058 [2024-11-18 06:51:40.783216] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:20.058 [2024-11-18 06:51:40.783285] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105540 ] 00:06:20.058 [2024-11-18 06:51:40.849221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.058 [2024-11-18 06:51:40.892996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.995 test_start 00:06:20.995 test_end 00:06:20.995 Performance: 446184 events per second 00:06:20.995 00:06:20.995 real 0m1.167s 00:06:20.995 user 0m1.100s 00:06:20.995 sys 0m0.063s 00:06:20.995 06:51:41 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.995 06:51:41 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:20.995 ************************************ 00:06:20.995 END TEST event_reactor_perf 00:06:20.995 ************************************ 00:06:20.995 06:51:41 event -- event/event.sh@49 -- # uname -s 00:06:20.995 06:51:41 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:20.995 06:51:41 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:20.995 06:51:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.995 06:51:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.995 06:51:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.255 ************************************ 00:06:21.255 START TEST event_scheduler 00:06:21.255 ************************************ 00:06:21.255 06:51:41 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:21.255 * Looking for test storage... 
00:06:21.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:21.255 06:51:42 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:21.255 06:51:42 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:21.255 06:51:42 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:21.255 06:51:42 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:21.255 06:51:42 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.255 06:51:42 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.255 06:51:42 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.255 06:51:42 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.255 06:51:42 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.255 06:51:42 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.255 06:51:42 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.255 06:51:42 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.255 06:51:42 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.255 06:51:42 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.255 06:51:42 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.255 06:51:42 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:21.255 06:51:42 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:21.255 06:51:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.256 06:51:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.256 06:51:42 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:21.256 06:51:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:21.256 06:51:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.256 06:51:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:21.256 06:51:42 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.256 06:51:42 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:21.256 06:51:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:21.256 06:51:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.256 06:51:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:21.256 06:51:42 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.256 06:51:42 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.256 06:51:42 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.256 06:51:42 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:21.256 06:51:42 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.256 06:51:42 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:21.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.256 --rc genhtml_branch_coverage=1 00:06:21.256 --rc genhtml_function_coverage=1 00:06:21.256 --rc genhtml_legend=1 00:06:21.256 --rc geninfo_all_blocks=1 00:06:21.256 --rc geninfo_unexecuted_blocks=1 00:06:21.256 00:06:21.256 ' 00:06:21.256 06:51:42 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:21.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.256 --rc genhtml_branch_coverage=1 00:06:21.256 --rc genhtml_function_coverage=1 00:06:21.256 --rc genhtml_legend=1 00:06:21.256 --rc geninfo_all_blocks=1 00:06:21.256 --rc geninfo_unexecuted_blocks=1 00:06:21.256 00:06:21.256 ' 00:06:21.256 06:51:42 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:21.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.256 --rc genhtml_branch_coverage=1 00:06:21.256 --rc genhtml_function_coverage=1 00:06:21.256 --rc genhtml_legend=1 00:06:21.256 --rc geninfo_all_blocks=1 00:06:21.256 --rc geninfo_unexecuted_blocks=1 00:06:21.256 00:06:21.256 ' 00:06:21.256 06:51:42 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:21.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.256 --rc genhtml_branch_coverage=1 00:06:21.256 --rc genhtml_function_coverage=1 00:06:21.256 --rc genhtml_legend=1 00:06:21.256 --rc geninfo_all_blocks=1 00:06:21.256 --rc geninfo_unexecuted_blocks=1 00:06:21.256 00:06:21.256 ' 00:06:21.256 06:51:42 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:21.256 06:51:42 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=105728 00:06:21.256 06:51:42 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:21.256 06:51:42 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.256 06:51:42 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 105728 
00:06:21.256 06:51:42 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 105728 ']' 00:06:21.256 06:51:42 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.256 06:51:42 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.256 06:51:42 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.256 06:51:42 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.256 06:51:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.256 [2024-11-18 06:51:42.169657] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:21.256 [2024-11-18 06:51:42.169737] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105728 ] 00:06:21.515 [2024-11-18 06:51:42.237533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:21.515 [2024-11-18 06:51:42.290073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.515 [2024-11-18 06:51:42.290135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.515 [2024-11-18 06:51:42.290202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.515 [2024-11-18 06:51:42.290205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.515 06:51:42 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.515 06:51:42 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:21.515 06:51:42 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:21.515 06:51:42 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.515 06:51:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.515 [2024-11-18 06:51:42.423323] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:21.515 [2024-11-18 06:51:42.423351] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:21.515 [2024-11-18 06:51:42.423376] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:21.515 [2024-11-18 06:51:42.423397] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:21.515 [2024-11-18 06:51:42.423414] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:21.515 06:51:42 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.515 06:51:42 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:21.515 06:51:42 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.515 06:51:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.774 [2024-11-18 06:51:42.523581] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
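The pair of rpc_cmd calls traced above is the core of this scheduler test: the app is launched with --wait-for-rpc, so the dynamic scheduler has to be selected over the RPC socket before subsystem initialization is allowed to run. A minimal manual equivalent of that sequence, assuming the default /var/tmp/spdk.sock socket that waitforlisten is shown polling for above, would be:

    # sketch only -- the same two RPCs issued directly via rpc.py instead of rpc_cmd
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init

Both methods appear in the rpc_get_methods listing earlier in this log. The dpdk_governor error printed above does not fail the test: the subsequent set_opts notices show the dynamic scheduler continuing without the governor.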
00:06:21.774 06:51:42 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.774 06:51:42 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:21.774 06:51:42 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.774 06:51:42 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.774 06:51:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.774 ************************************ 00:06:21.774 START TEST scheduler_create_thread 00:06:21.774 ************************************ 00:06:21.774 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:21.774 06:51:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:21.774 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.774 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.774 2 00:06:21.774 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.774 06:51:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:21.774 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.774 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.774 3 00:06:21.774 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.774 06:51:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:21.774 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.774 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.774 4 00:06:21.774 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.774 06:51:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:21.774 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.774 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.774 5 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.775 6 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.775 7 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.775 8 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.775 9 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.775 10 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.775 06:51:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.341 06:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.341 00:06:22.341 real 0m0.592s 00:06:22.341 user 0m0.011s 00:06:22.341 sys 0m0.002s 00:06:22.341 06:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.341 06:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.341 ************************************ 00:06:22.341 END TEST scheduler_create_thread 00:06:22.341 ************************************ 00:06:22.341 06:51:43 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:22.341 06:51:43 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 105728 00:06:22.341 06:51:43 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 105728 ']' 00:06:22.341 06:51:43 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 105728 00:06:22.341 06:51:43 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:22.341 06:51:43 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.341 06:51:43 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105728 00:06:22.341 06:51:43 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:22.341 06:51:43 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:22.341 06:51:43 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105728' 00:06:22.341 killing process with pid 105728 00:06:22.341 06:51:43 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 105728 00:06:22.341 06:51:43 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 105728 00:06:22.910 [2024-11-18 06:51:43.623634] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
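The killprocess helper traced just above (and earlier for pids 104814 and 105028) repeats the same pattern each time. Reconstructed from the xtrace output, a simplified sketch of that pattern looks like the following; the real helper also handles the case where the target's comm name is sudo, which is elided here:

    killprocess() {                              # simplified reconstruction, not the actual helper
        local pid=$1
        [ -z "$pid" ] && return 1                # no pid given
        kill -0 "$pid" || return 0               # already gone, nothing to do
        local name
        [ "$(uname)" = Linux ] && name=$(ps --no-headers -o comm= "$pid")
        if [ "$name" != sudo ]; then             # sudo-wrapped apps take a different branch
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                              # reap the process so the test sees its exit status
    }

Each of those steps (kill -0, uname, ps --no-headers -o comm=, kill, wait) is visible verbatim in the trace above.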
00:06:22.910 00:06:22.910 real 0m1.816s 00:06:22.910 user 0m2.539s 00:06:22.910 sys 0m0.358s 00:06:22.910 06:51:43 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.910 06:51:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:22.910 ************************************ 00:06:22.910 END TEST event_scheduler 00:06:22.910 ************************************ 00:06:22.910 06:51:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:22.910 06:51:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:22.910 06:51:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.910 06:51:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.910 06:51:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.910 ************************************ 00:06:22.910 START TEST app_repeat 00:06:22.910 ************************************ 00:06:22.910 06:51:43 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:22.910 06:51:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.910 06:51:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.910 06:51:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:22.910 06:51:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.910 06:51:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:22.910 06:51:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:22.910 06:51:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:22.910 06:51:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=106042 00:06:22.910 06:51:43 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:22.911 06:51:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.911 06:51:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 106042' 00:06:22.911 Process app_repeat pid: 106042 00:06:22.911 06:51:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:22.911 06:51:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:22.911 spdk_app_start Round 0 00:06:22.911 06:51:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 106042 /var/tmp/spdk-nbd.sock 00:06:22.911 06:51:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 106042 ']' 00:06:22.911 06:51:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.911 06:51:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.911 06:51:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:22.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:22.911 06:51:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.911 06:51:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:23.170 [2024-11-18 06:51:43.895398] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:23.171 [2024-11-18 06:51:43.895465] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106042 ] 00:06:23.171 [2024-11-18 06:51:43.959480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.171 [2024-11-18 06:51:44.003257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.171 [2024-11-18 06:51:44.003261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.171 06:51:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.171 06:51:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:23.171 06:51:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.429 Malloc0 00:06:23.688 06:51:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.947 Malloc1 00:06:23.947 06:51:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.947 06:51:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.947 06:51:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.947 06:51:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:23.947 06:51:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.947 06:51:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:23.947 06:51:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.947 06:51:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.947 06:51:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.947 06:51:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:23.947 06:51:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.947 06:51:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:23.947 06:51:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:23.947 06:51:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:23.947 06:51:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.947 06:51:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:24.205 /dev/nbd0 00:06:24.205 06:51:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:24.205 06:51:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:24.205 06:51:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:24.205 06:51:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:24.205 06:51:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:24.205 06:51:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:24.205 06:51:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:06:24.205 06:51:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:24.205 06:51:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:24.205 06:51:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:24.205 06:51:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.205 1+0 records in 00:06:24.205 1+0 records out 00:06:24.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235706 s, 17.4 MB/s 00:06:24.205 06:51:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.205 06:51:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:24.205 06:51:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.205 06:51:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:24.206 06:51:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:24.206 06:51:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.206 06:51:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.206 06:51:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:24.464 /dev/nbd1 00:06:24.464 06:51:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:24.464 06:51:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:24.464 06:51:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:24.464 06:51:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:24.464 06:51:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:24.464 06:51:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:24.464 06:51:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:24.464 06:51:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:24.464 06:51:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:24.464 06:51:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:24.464 06:51:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.464 1+0 records in 00:06:24.464 1+0 records out 00:06:24.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195388 s, 21.0 MB/s 00:06:24.464 06:51:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.464 06:51:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:24.464 06:51:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.464 06:51:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:24.464 06:51:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:24.464 06:51:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.464 06:51:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.464 
06:51:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.464 06:51:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.464 06:51:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.723 06:51:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:24.723 { 00:06:24.723 "nbd_device": "/dev/nbd0", 00:06:24.723 "bdev_name": "Malloc0" 00:06:24.723 }, 00:06:24.723 { 00:06:24.723 "nbd_device": "/dev/nbd1", 00:06:24.723 "bdev_name": "Malloc1" 00:06:24.723 } 00:06:24.723 ]' 00:06:24.723 06:51:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:24.723 { 00:06:24.723 "nbd_device": "/dev/nbd0", 00:06:24.723 "bdev_name": "Malloc0" 00:06:24.723 }, 00:06:24.723 { 00:06:24.723 "nbd_device": "/dev/nbd1", 00:06:24.723 "bdev_name": "Malloc1" 00:06:24.723 } 00:06:24.723 ]' 00:06:24.723 06:51:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.723 06:51:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:24.723 /dev/nbd1' 00:06:24.723 06:51:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:24.723 /dev/nbd1' 00:06:24.723 06:51:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.723 06:51:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:24.723 06:51:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:24.723 06:51:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:24.723 06:51:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:24.723 06:51:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:24.723 06:51:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.723 06:51:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.723 06:51:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:24.723 06:51:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.723 06:51:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:24.723 06:51:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:24.723 256+0 records in 00:06:24.723 256+0 records out 00:06:24.723 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00512012 s, 205 MB/s 00:06:24.723 06:51:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.723 06:51:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:24.982 256+0 records in 00:06:24.982 256+0 records out 00:06:24.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207237 s, 50.6 MB/s 00:06:24.982 06:51:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.982 06:51:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:24.982 256+0 records in 00:06:24.982 256+0 records out 00:06:24.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219303 s, 47.8 MB/s 00:06:24.982 06:51:45 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:24.982 06:51:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.982 06:51:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.982 06:51:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:24.982 06:51:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.982 06:51:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.982 06:51:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.982 06:51:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.982 06:51:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:24.982 06:51:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.982 06:51:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:24.982 06:51:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.982 06:51:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:24.982 06:51:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.982 06:51:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.982 06:51:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.982 06:51:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:24.982 06:51:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.982 06:51:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.240 06:51:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.240 06:51:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.240 06:51:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.240 06:51:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.240 06:51:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.240 06:51:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.240 06:51:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.240 06:51:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.240 06:51:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.240 06:51:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:25.498 06:51:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:25.498 06:51:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:25.498 06:51:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:25.498 06:51:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.498 06:51:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:06:25.498 06:51:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:25.498 06:51:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.498 06:51:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.498 06:51:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.498 06:51:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.499 06:51:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.757 06:51:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.757 06:51:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.757 06:51:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.757 06:51:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.757 06:51:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.757 06:51:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.757 06:51:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:25.757 06:51:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.757 06:51:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.757 06:51:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:25.757 06:51:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:25.757 06:51:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:25.757 06:51:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:26.015 06:51:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:26.274 [2024-11-18 06:51:47.110697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.274 [2024-11-18 06:51:47.153238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.274 [2024-11-18 06:51:47.153241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.274 [2024-11-18 06:51:47.211263] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:26.274 [2024-11-18 06:51:47.211339] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:29.557 06:51:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:29.557 06:51:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:29.557 spdk_app_start Round 1 00:06:29.557 06:51:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 106042 /var/tmp/spdk-nbd.sock 00:06:29.557 06:51:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 106042 ']' 00:06:29.557 06:51:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.557 06:51:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.557 06:51:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:29.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
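Each round then issues the same RPC sequence visible in the surrounding trace: create two 64 MiB malloc bdevs with a 4096-byte block size, then export each one through the kernel nbd driver so ordinary tools can read and write it. A condensed sketch using only the RPC calls that appear in this log (paths assume the spdk checkout above):

    RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    # two RAM-backed bdevs, 64 MiB each with 4096-byte blocks; SPDK names them Malloc0/Malloc1
    $RPC bdev_malloc_create 64 4096
    $RPC bdev_malloc_create 64 4096
    # expose each bdev as a kernel nbd device (modprobe nbd was done earlier in the trace)
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    # the JSON returned here is what the test filters with jq to count attached devices
    $RPC nbd_get_disks

The jq filter '.[] | .nbd_device' seen in the trace is how the test turns that JSON into the /dev/nbd0 /dev/nbd1 name list it checks against.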
00:06:29.557 06:51:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.557 06:51:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.557 06:51:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.557 06:51:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:29.557 06:51:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.557 Malloc0 00:06:29.557 06:51:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.815 Malloc1 00:06:30.073 06:51:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.073 06:51:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.073 06:51:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.073 06:51:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:30.074 06:51:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.074 06:51:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:30.074 06:51:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.074 06:51:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.074 06:51:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.074 06:51:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:30.074 06:51:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.074 06:51:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:30.074 06:51:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:30.074 06:51:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:30.074 06:51:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.074 06:51:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:30.331 /dev/nbd0 00:06:30.331 06:51:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:30.331 06:51:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:30.331 06:51:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:30.331 06:51:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:30.331 06:51:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:30.331 06:51:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:30.331 06:51:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:30.331 06:51:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:30.331 06:51:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:30.331 06:51:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:30.331 06:51:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:30.331 1+0 records in 00:06:30.331 1+0 records out 00:06:30.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000159287 s, 25.7 MB/s 00:06:30.331 06:51:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.331 06:51:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:30.331 06:51:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.331 06:51:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:30.331 06:51:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:30.331 06:51:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.331 06:51:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.331 06:51:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:30.589 /dev/nbd1 00:06:30.589 06:51:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:30.589 06:51:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:30.589 06:51:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:30.589 06:51:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:30.589 06:51:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:30.589 06:51:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:30.589 06:51:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:30.589 06:51:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:30.589 06:51:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:30.589 06:51:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:30.589 06:51:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.589 1+0 records in 00:06:30.589 1+0 records out 00:06:30.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170275 s, 24.1 MB/s 00:06:30.589 06:51:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.589 06:51:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:30.589 06:51:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.589 06:51:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:30.590 06:51:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:30.590 06:51:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.590 06:51:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.590 06:51:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.590 06:51:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.590 06:51:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.848 06:51:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:30.848 { 00:06:30.848 "nbd_device": "/dev/nbd0", 00:06:30.848 "bdev_name": "Malloc0" 00:06:30.848 }, 00:06:30.848 { 00:06:30.848 "nbd_device": "/dev/nbd1", 00:06:30.848 "bdev_name": "Malloc1" 00:06:30.848 } 00:06:30.848 ]' 00:06:30.848 06:51:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:30.848 { 00:06:30.848 "nbd_device": "/dev/nbd0", 00:06:30.848 "bdev_name": "Malloc0" 00:06:30.848 }, 00:06:30.848 { 00:06:30.848 "nbd_device": "/dev/nbd1", 00:06:30.848 "bdev_name": "Malloc1" 00:06:30.848 } 00:06:30.848 ]' 00:06:30.848 06:51:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.848 06:51:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:30.848 /dev/nbd1' 00:06:30.848 06:51:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:30.848 /dev/nbd1' 00:06:30.848 06:51:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.848 06:51:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:30.848 06:51:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:30.848 06:51:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:30.848 06:51:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:30.848 06:51:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:30.848 06:51:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.848 06:51:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.848 06:51:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:30.848 06:51:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.848 06:51:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:30.848 06:51:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:30.848 256+0 records in 00:06:30.848 256+0 records out 00:06:30.848 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00434173 s, 242 MB/s 00:06:30.849 06:51:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.849 06:51:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:30.849 256+0 records in 00:06:30.849 256+0 records out 00:06:30.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211433 s, 49.6 MB/s 00:06:30.849 06:51:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.849 06:51:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:30.849 256+0 records in 00:06:30.849 256+0 records out 00:06:30.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226408 s, 46.3 MB/s 00:06:30.849 06:51:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:30.849 06:51:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.849 06:51:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.849 06:51:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:30.849 06:51:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.849 06:51:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:30.849 06:51:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:30.849 06:51:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.849 06:51:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:30.849 06:51:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.849 06:51:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:30.849 06:51:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.849 06:51:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:30.849 06:51:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.849 06:51:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.849 06:51:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:30.849 06:51:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:30.849 06:51:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.849 06:51:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:31.107 06:51:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:31.365 06:51:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:31.365 06:51:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:31.365 06:51:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.365 06:51:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.365 06:51:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:31.365 06:51:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.365 06:51:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.365 06:51:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.365 06:51:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:31.624 06:51:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:31.624 06:51:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:31.624 06:51:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:31.624 06:51:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.624 06:51:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.624 06:51:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:31.624 06:51:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.624 06:51:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.624 06:51:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.624 06:51:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.624 06:51:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.882 06:51:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:31.882 06:51:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:31.882 06:51:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.882 06:51:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:31.882 06:51:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:31.882 06:51:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.882 06:51:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:31.882 06:51:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:31.882 06:51:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:31.882 06:51:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:31.882 06:51:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:31.882 06:51:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:31.882 06:51:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:32.140 06:51:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:32.399 [2024-11-18 06:51:53.175945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.399 [2024-11-18 06:51:53.219190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.399 [2024-11-18 06:51:53.219190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.399 [2024-11-18 06:51:53.277953] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:32.399 [2024-11-18 06:51:53.278026] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:35.684 06:51:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:35.684 06:51:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:35.684 spdk_app_start Round 2 00:06:35.684 06:51:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 106042 /var/tmp/spdk-nbd.sock 00:06:35.684 06:51:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 106042 ']' 00:06:35.684 06:51:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.684 06:51:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.684 06:51:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:35.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
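The nbd_dd_data_verify steps traced above use a plain write-then-compare check: 1 MiB of random data is generated once, written to each nbd device with O_DIRECT, and read back with cmp. A condensed sketch of that pattern, with the commands and temp-file path taken directly from the log:

    tmp=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of=$tmp bs=4096 count=256            # 1 MiB of random test data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct   # write, bypassing the page cache
        cmp -b -n 1M $tmp $nbd                              # byte-for-byte verification
    done
    rm $tmp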
00:06:35.684 06:51:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.684 06:51:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.684 06:51:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.684 06:51:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:35.684 06:51:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.684 Malloc0 00:06:35.684 06:51:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.943 Malloc1 00:06:35.943 06:51:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.943 06:51:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.943 06:51:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.943 06:51:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:35.943 06:51:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.943 06:51:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:35.943 06:51:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.943 06:51:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.943 06:51:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.943 06:51:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:35.943 06:51:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.943 06:51:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:35.943 06:51:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:35.943 06:51:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:35.943 06:51:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.943 06:51:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:36.202 /dev/nbd0 00:06:36.202 06:51:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:36.202 06:51:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:36.202 06:51:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:36.202 06:51:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:36.202 06:51:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:36.202 06:51:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:36.202 06:51:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:36.202 06:51:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:36.202 06:51:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:36.202 06:51:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:36.202 06:51:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:36.202 1+0 records in 00:06:36.202 1+0 records out 00:06:36.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219737 s, 18.6 MB/s 00:06:36.202 06:51:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:36.202 06:51:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:36.202 06:51:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:36.202 06:51:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:36.202 06:51:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:36.202 06:51:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.202 06:51:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.202 06:51:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:36.461 /dev/nbd1 00:06:36.461 06:51:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:36.461 06:51:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:36.461 06:51:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:36.461 06:51:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:36.461 06:51:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:36.461 06:51:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:36.461 06:51:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:36.461 06:51:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:36.461 06:51:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:36.461 06:51:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:36.461 06:51:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.461 1+0 records in 00:06:36.461 1+0 records out 00:06:36.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00014629 s, 28.0 MB/s 00:06:36.461 06:51:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:36.461 06:51:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:36.461 06:51:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:36.461 06:51:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:36.461 06:51:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:36.461 06:51:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.461 06:51:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.461 06:51:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.461 06:51:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.461 06:51:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.719 06:51:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:36.719 { 00:06:36.719 "nbd_device": "/dev/nbd0", 00:06:36.719 "bdev_name": "Malloc0" 00:06:36.719 }, 00:06:36.719 { 00:06:36.719 "nbd_device": "/dev/nbd1", 00:06:36.719 "bdev_name": "Malloc1" 00:06:36.719 } 00:06:36.719 ]' 00:06:36.719 06:51:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.719 { 00:06:36.719 "nbd_device": "/dev/nbd0", 00:06:36.719 "bdev_name": "Malloc0" 00:06:36.719 }, 00:06:36.719 { 00:06:36.719 "nbd_device": "/dev/nbd1", 00:06:36.719 "bdev_name": "Malloc1" 00:06:36.719 } 00:06:36.719 ]' 00:06:36.719 06:51:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.976 06:51:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:36.976 /dev/nbd1' 00:06:36.976 06:51:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:36.976 /dev/nbd1' 00:06:36.976 06:51:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.976 06:51:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:36.976 06:51:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:36.976 06:51:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:36.977 256+0 records in 00:06:36.977 256+0 records out 00:06:36.977 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00482259 s, 217 MB/s 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:36.977 256+0 records in 00:06:36.977 256+0 records out 00:06:36.977 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200442 s, 52.3 MB/s 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:36.977 256+0 records in 00:06:36.977 256+0 records out 00:06:36.977 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212127 s, 49.4 MB/s 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.977 06:51:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:37.235 06:51:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.235 06:51:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.235 06:51:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.235 06:51:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.235 06:51:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.236 06:51:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.236 06:51:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.236 06:51:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.236 06:51:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.236 06:51:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:37.494 06:51:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:37.494 06:51:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:37.494 06:51:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:37.494 06:51:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.494 06:51:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.494 06:51:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:37.494 06:51:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.494 06:51:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.494 06:51:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.494 06:51:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.494 06:51:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.752 06:51:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:37.752 06:51:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:37.752 06:51:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.752 06:51:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:37.752 06:51:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:37.752 06:51:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.752 06:51:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:37.752 06:51:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:37.752 06:51:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:37.752 06:51:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:37.752 06:51:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:37.752 06:51:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:37.752 06:51:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:38.319 06:51:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:38.319 [2024-11-18 06:51:59.173801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.319 [2024-11-18 06:51:59.215507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.319 [2024-11-18 06:51:59.215529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.319 [2024-11-18 06:51:59.269546] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:38.319 [2024-11-18 06:51:59.269611] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:41.610 06:52:01 event.app_repeat -- event/event.sh@38 -- # waitforlisten 106042 /var/tmp/spdk-nbd.sock 00:06:41.610 06:52:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 106042 ']' 00:06:41.610 06:52:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:41.610 06:52:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.610 06:52:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:41.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
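Teardown mirrors the setup in every round: each nbd device is detached, the helper waits for it to drop out of /proc/partitions, nbd_get_disks is checked for an empty list, and the target is told to exit so the next round can start cleanly. A hedged sketch assembled from the calls shown above (the retry limit of 20 comes from the waitfornbd_exit loop in the trace):

    RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for name in nbd0 nbd1; do
        $RPC nbd_stop_disk /dev/$name
        i=1
        # wait for the kernel to release the device, up to 20 tries
        while grep -q -w $name /proc/partitions && [ $i -le 20 ]; do
            sleep 0.1; i=$((i + 1))
        done
    done
    [ "$($RPC nbd_get_disks)" = "[]" ]        # no nbd devices should remain attached
    $RPC spdk_kill_instance SIGTERM           # stop this round's app_repeat instance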
00:06:41.610 06:52:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.610 06:52:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.610 06:52:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.610 06:52:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:41.610 06:52:02 event.app_repeat -- event/event.sh@39 -- # killprocess 106042 00:06:41.610 06:52:02 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 106042 ']' 00:06:41.610 06:52:02 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 106042 00:06:41.610 06:52:02 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:41.610 06:52:02 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.610 06:52:02 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106042 00:06:41.610 06:52:02 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.610 06:52:02 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.610 06:52:02 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106042' 00:06:41.610 killing process with pid 106042 00:06:41.610 06:52:02 event.app_repeat -- common/autotest_common.sh@973 -- # kill 106042 00:06:41.610 06:52:02 event.app_repeat -- common/autotest_common.sh@978 -- # wait 106042 00:06:41.610 spdk_app_start is called in Round 0. 00:06:41.610 Shutdown signal received, stop current app iteration 00:06:41.610 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 reinitialization... 00:06:41.610 spdk_app_start is called in Round 1. 00:06:41.610 Shutdown signal received, stop current app iteration 00:06:41.610 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 reinitialization... 00:06:41.610 spdk_app_start is called in Round 2. 00:06:41.610 Shutdown signal received, stop current app iteration 00:06:41.610 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 reinitialization... 00:06:41.610 spdk_app_start is called in Round 3. 
00:06:41.610 Shutdown signal received, stop current app iteration 00:06:41.610 06:52:02 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:41.610 06:52:02 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:41.610 00:06:41.610 real 0m18.596s 00:06:41.610 user 0m41.295s 00:06:41.610 sys 0m3.269s 00:06:41.610 06:52:02 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.610 06:52:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.610 ************************************ 00:06:41.610 END TEST app_repeat 00:06:41.610 ************************************ 00:06:41.610 06:52:02 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:41.610 06:52:02 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:41.610 06:52:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.610 06:52:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.610 06:52:02 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.610 ************************************ 00:06:41.610 START TEST cpu_locks 00:06:41.610 ************************************ 00:06:41.610 06:52:02 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:41.610 * Looking for test storage... 00:06:41.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:41.610 06:52:02 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:41.610 06:52:02 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:41.610 06:52:02 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:41.869 06:52:02 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.869 06:52:02 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:41.869 06:52:02 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.869 06:52:02 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:41.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.869 --rc genhtml_branch_coverage=1 00:06:41.869 --rc genhtml_function_coverage=1 00:06:41.869 --rc genhtml_legend=1 00:06:41.869 --rc geninfo_all_blocks=1 00:06:41.869 --rc geninfo_unexecuted_blocks=1 00:06:41.869 00:06:41.869 ' 00:06:41.869 06:52:02 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:41.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.869 --rc genhtml_branch_coverage=1 00:06:41.869 --rc genhtml_function_coverage=1 00:06:41.869 --rc genhtml_legend=1 00:06:41.869 --rc geninfo_all_blocks=1 00:06:41.869 --rc geninfo_unexecuted_blocks=1 00:06:41.869 00:06:41.869 ' 00:06:41.869 06:52:02 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:41.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.869 --rc genhtml_branch_coverage=1 00:06:41.869 --rc genhtml_function_coverage=1 00:06:41.869 --rc genhtml_legend=1 00:06:41.869 --rc geninfo_all_blocks=1 00:06:41.869 --rc geninfo_unexecuted_blocks=1 00:06:41.869 00:06:41.869 ' 00:06:41.869 06:52:02 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:41.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.869 --rc genhtml_branch_coverage=1 00:06:41.869 --rc genhtml_function_coverage=1 00:06:41.869 --rc genhtml_legend=1 00:06:41.869 --rc geninfo_all_blocks=1 00:06:41.869 --rc geninfo_unexecuted_blocks=1 00:06:41.869 00:06:41.869 ' 00:06:41.869 06:52:02 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:41.870 06:52:02 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:41.870 06:52:02 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:41.870 06:52:02 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:41.870 06:52:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.870 06:52:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.870 06:52:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.870 ************************************ 
00:06:41.870 START TEST default_locks 00:06:41.870 ************************************ 00:06:41.870 06:52:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:41.870 06:52:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=108517 00:06:41.870 06:52:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.870 06:52:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 108517 00:06:41.870 06:52:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 108517 ']' 00:06:41.870 06:52:02 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.870 06:52:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.870 06:52:02 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.870 06:52:02 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.870 06:52:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.870 [2024-11-18 06:52:02.748229] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:41.870 [2024-11-18 06:52:02.748314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108517 ] 00:06:41.870 [2024-11-18 06:52:02.819589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.128 [2024-11-18 06:52:02.869365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.386 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.386 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:42.386 06:52:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 108517 00:06:42.386 06:52:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 108517 00:06:42.386 06:52:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.386 lslocks: write error 00:06:42.386 06:52:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 108517 00:06:42.386 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 108517 ']' 00:06:42.386 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 108517 00:06:42.386 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:42.386 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.386 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108517 00:06:42.644 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.644 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.644 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108517' 
00:06:42.644 killing process with pid 108517 00:06:42.644 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 108517 00:06:42.644 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 108517 00:06:42.903 06:52:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 108517 00:06:42.903 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:42.903 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 108517 00:06:42.903 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:42.903 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.903 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:42.903 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.903 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 108517 00:06:42.903 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 108517 ']' 00:06:42.903 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.903 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.903 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:42.903 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.903 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (108517) - No such process 00:06:42.903 ERROR: process (pid: 108517) is no longer running 00:06:42.904 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.904 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:42.904 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:42.904 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:42.904 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:42.904 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:42.904 06:52:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:42.904 06:52:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:42.904 06:52:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:42.904 06:52:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:42.904 00:06:42.904 real 0m1.058s 00:06:42.904 user 0m1.012s 00:06:42.904 sys 0m0.513s 00:06:42.904 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.904 06:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.904 ************************************ 00:06:42.904 END TEST default_locks 00:06:42.904 ************************************ 00:06:42.904 06:52:03 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:42.904 06:52:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.904 06:52:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.904 06:52:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.904 ************************************ 00:06:42.904 START TEST default_locks_via_rpc 00:06:42.904 ************************************ 00:06:42.904 06:52:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:42.904 06:52:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=108648 00:06:42.904 06:52:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.904 06:52:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 108648 00:06:42.904 06:52:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 108648 ']' 00:06:42.904 06:52:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.904 06:52:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.904 06:52:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
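For orientation, the locks_exist step traced in the default_locks test above reduces to asking the kernel which files the target process holds locks on and looking for the spdk_cpu_lock prefix. A minimal stand-alone sketch of that check (the pid value is illustrative, copied from this run):

  pid=108517                                     # illustrative: the spdk_tgt PID from this run
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "core lock file held by pid $pid"
  else
      echo "no spdk_cpu_lock held by pid $pid" >&2
  fi

The "lslocks: write error" lines in the trace are a side effect of grep -q closing the pipe as soon as it matches, not a test failure.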
00:06:42.904 06:52:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.904 06:52:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.904 [2024-11-18 06:52:03.859977] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:42.904 [2024-11-18 06:52:03.860067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108648 ] 00:06:43.163 [2024-11-18 06:52:03.930274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.163 [2024-11-18 06:52:03.977996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.422 06:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.422 06:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:43.422 06:52:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:43.422 06:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.422 06:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.422 06:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.422 06:52:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:43.422 06:52:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:43.422 06:52:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:43.422 06:52:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:43.422 06:52:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:43.422 06:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.422 06:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.422 06:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.422 06:52:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 108648 00:06:43.422 06:52:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 108648 00:06:43.422 06:52:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.681 06:52:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 108648 00:06:43.681 06:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 108648 ']' 00:06:43.681 06:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 108648 00:06:43.681 06:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:43.681 06:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.681 06:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108648 00:06:43.681 06:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.681 06:52:04 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.681 06:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108648' 00:06:43.681 killing process with pid 108648 00:06:43.681 06:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 108648 00:06:43.681 06:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 108648 00:06:44.249 00:06:44.249 real 0m1.128s 00:06:44.249 user 0m1.093s 00:06:44.249 sys 0m0.499s 00:06:44.249 06:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.249 06:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.249 ************************************ 00:06:44.249 END TEST default_locks_via_rpc 00:06:44.249 ************************************ 00:06:44.249 06:52:04 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:44.249 06:52:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.249 06:52:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.249 06:52:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.249 ************************************ 00:06:44.249 START TEST non_locking_app_on_locked_coremask 00:06:44.249 ************************************ 00:06:44.249 06:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:44.249 06:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=108849 00:06:44.249 06:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.249 06:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 108849 /var/tmp/spdk.sock 00:06:44.249 06:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108849 ']' 00:06:44.249 06:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.249 06:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.249 06:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.249 06:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.249 06:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.249 [2024-11-18 06:52:05.034030] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
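The default_locks_via_rpc run that finishes above performs the same lslocks check, but toggles the lock files at runtime over JSON-RPC instead of at startup. Roughly, using the suite's rpc_cmd wrapper as it appears in the trace (spdk_tgt_pid is illustrative):

  rpc_cmd framework_disable_cpumask_locks        # target releases its lock files
  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "unexpected: lock still held" >&2
  rpc_cmd framework_enable_cpumask_locks         # target re-claims the lock files
  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "lock re-acquired as expected"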
00:06:44.249 [2024-11-18 06:52:05.034118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108849 ] 00:06:44.249 [2024-11-18 06:52:05.102580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.249 [2024-11-18 06:52:05.151258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.508 06:52:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.508 06:52:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:44.508 06:52:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=108879 00:06:44.508 06:52:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:44.508 06:52:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 108879 /var/tmp/spdk2.sock 00:06:44.508 06:52:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108879 ']' 00:06:44.508 06:52:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.508 06:52:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.508 06:52:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.508 06:52:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.508 06:52:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.508 [2024-11-18 06:52:05.465959] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:44.508 [2024-11-18 06:52:05.466054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108879 ] 00:06:44.767 [2024-11-18 06:52:05.566651] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
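That "CPU core locks deactivated." notice comes from the second target in non_locking_app_on_locked_coremask: it reuses core mask 0x1 but skips lock claiming and listens on its own RPC socket, so it can coexist with the locked first instance. A sketch of the launch pattern, with the flags and socket path copied from the trace (spdk_tgt stands for build/bin/spdk_tgt):

  spdk_tgt -m 0x1 &                                                 # first instance claims the core-0 lock file
  spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # second instance shares core 0 without locking
  # (the suite waits for each RPC socket before running any checks)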
00:06:44.767 [2024-11-18 06:52:05.566690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.767 [2024-11-18 06:52:05.658546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.703 06:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.703 06:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:45.703 06:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 108849 00:06:45.703 06:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 108849 00:06:45.703 06:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.962 lslocks: write error 00:06:45.962 06:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 108849 00:06:45.962 06:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108849 ']' 00:06:45.962 06:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 108849 00:06:45.962 06:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:45.962 06:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.962 06:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108849 00:06:45.962 06:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.962 06:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.962 06:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108849' 00:06:45.962 killing process with pid 108849 00:06:45.962 06:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 108849 00:06:45.962 06:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 108849 00:06:46.900 06:52:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 108879 00:06:46.900 06:52:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108879 ']' 00:06:46.900 06:52:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 108879 00:06:46.900 06:52:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:46.900 06:52:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.900 06:52:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108879 00:06:46.900 06:52:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.900 06:52:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.900 06:52:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108879' 00:06:46.900 killing 
process with pid 108879 00:06:46.900 06:52:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 108879 00:06:46.900 06:52:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 108879 00:06:47.158 00:06:47.158 real 0m3.026s 00:06:47.158 user 0m3.281s 00:06:47.158 sys 0m0.982s 00:06:47.158 06:52:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.158 06:52:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.158 ************************************ 00:06:47.158 END TEST non_locking_app_on_locked_coremask 00:06:47.158 ************************************ 00:06:47.158 06:52:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:47.158 06:52:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.158 06:52:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.158 06:52:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.158 ************************************ 00:06:47.158 START TEST locking_app_on_unlocked_coremask 00:06:47.158 ************************************ 00:06:47.158 06:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:47.158 06:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=109190 00:06:47.158 06:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:47.158 06:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 109190 /var/tmp/spdk.sock 00:06:47.158 06:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 109190 ']' 00:06:47.158 06:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.158 06:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.159 06:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.159 06:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.159 06:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.159 [2024-11-18 06:52:08.112455] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:47.159 [2024-11-18 06:52:08.112563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109190 ] 00:06:47.417 [2024-11-18 06:52:08.180800] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:47.417 [2024-11-18 06:52:08.180845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.417 [2024-11-18 06:52:08.230143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.677 06:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.677 06:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:47.677 06:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=109312 00:06:47.677 06:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:47.677 06:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 109312 /var/tmp/spdk2.sock 00:06:47.677 06:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 109312 ']' 00:06:47.677 06:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.677 06:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.677 06:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.677 06:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.677 06:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.677 [2024-11-18 06:52:08.534300] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
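locking_app_on_unlocked_coremask inverts the previous case: the first target runs with --disable-cpumask-locks, and a second, normally locking target on /var/tmp/spdk2.sock is still free to claim core 0. The locks_exist check is then pointed at the second PID. Roughly (variable names illustrative):

  spdk_tgt -m 0x1 --disable-cpumask-locks &        # first instance, no lock files
  spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &         # second instance claims core 0
  second=$!
  lslocks -p "$second" | grep -q spdk_cpu_lock && echo "second instance holds the core-0 lock"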
00:06:47.677 [2024-11-18 06:52:08.534373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109312 ] 00:06:47.677 [2024-11-18 06:52:08.629942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.936 [2024-11-18 06:52:08.718587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.502 06:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.502 06:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:48.502 06:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 109312 00:06:48.502 06:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 109312 00:06:48.502 06:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.760 lslocks: write error 00:06:48.760 06:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 109190 00:06:48.760 06:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 109190 ']' 00:06:48.760 06:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 109190 00:06:48.760 06:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:48.760 06:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.760 06:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109190 00:06:48.760 06:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.760 06:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.760 06:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109190' 00:06:48.760 killing process with pid 109190 00:06:48.760 06:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 109190 00:06:48.760 06:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 109190 00:06:49.697 06:52:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 109312 00:06:49.697 06:52:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 109312 ']' 00:06:49.697 06:52:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 109312 00:06:49.697 06:52:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:49.697 06:52:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.697 06:52:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109312 00:06:49.697 06:52:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.697 06:52:10 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.697 06:52:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109312' 00:06:49.697 killing process with pid 109312 00:06:49.697 06:52:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 109312 00:06:49.697 06:52:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 109312 00:06:49.955 00:06:49.955 real 0m2.735s 00:06:49.955 user 0m2.767s 00:06:49.955 sys 0m0.948s 00:06:49.955 06:52:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.955 06:52:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.955 ************************************ 00:06:49.956 END TEST locking_app_on_unlocked_coremask 00:06:49.956 ************************************ 00:06:49.956 06:52:10 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:49.956 06:52:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.956 06:52:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.956 06:52:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.956 ************************************ 00:06:49.956 START TEST locking_app_on_locked_coremask 00:06:49.956 ************************************ 00:06:49.956 06:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:49.956 06:52:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=109611 00:06:49.956 06:52:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.956 06:52:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 109611 /var/tmp/spdk.sock 00:06:49.956 06:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 109611 ']' 00:06:49.956 06:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.956 06:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.956 06:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.956 06:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.956 06:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.956 [2024-11-18 06:52:10.898241] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:49.956 [2024-11-18 06:52:10.898346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109611 ] 00:06:50.215 [2024-11-18 06:52:10.963773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.215 [2024-11-18 06:52:11.006006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.474 06:52:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.474 06:52:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:50.474 06:52:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=109624 00:06:50.474 06:52:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 109624 /var/tmp/spdk2.sock 00:06:50.474 06:52:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:50.474 06:52:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:50.474 06:52:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 109624 /var/tmp/spdk2.sock 00:06:50.474 06:52:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:50.474 06:52:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.474 06:52:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:50.474 06:52:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.474 06:52:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 109624 /var/tmp/spdk2.sock 00:06:50.474 06:52:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 109624 ']' 00:06:50.474 06:52:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.474 06:52:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.474 06:52:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.474 06:52:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.474 06:52:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.474 [2024-11-18 06:52:11.308478] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
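Here the second target is started without --disable-cpumask-locks on a core that pid 109611 already holds, so it is expected to abort during startup; the NOT/waitforlisten wrappers in the trace turn that failure into a pass. A simplified version of the negative check (the suite's helpers replaced by a plain wait):

  spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &   # should log "Cannot create lock on core 0 ..." and exit
  second=$!
  if wait "$second"; then
      echo "unexpected: second target started on a locked core" >&2
  else
      echo "expected: startup refused because core 0 is already claimed"
  fi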
00:06:50.474 [2024-11-18 06:52:11.308575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109624 ] 00:06:50.474 [2024-11-18 06:52:11.413114] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 109611 has claimed it. 00:06:50.474 [2024-11-18 06:52:11.413173] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:51.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (109624) - No such process 00:06:51.408 ERROR: process (pid: 109624) is no longer running 00:06:51.408 06:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.408 06:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:51.408 06:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:51.408 06:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.408 06:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:51.408 06:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.408 06:52:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 109611 00:06:51.408 06:52:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 109611 00:06:51.408 06:52:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.408 lslocks: write error 00:06:51.408 06:52:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 109611 00:06:51.408 06:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 109611 ']' 00:06:51.408 06:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 109611 00:06:51.408 06:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:51.408 06:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.408 06:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109611 00:06:51.408 06:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.408 06:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.408 06:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109611' 00:06:51.408 killing process with pid 109611 00:06:51.408 06:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 109611 00:06:51.408 06:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 109611 00:06:51.976 00:06:51.976 real 0m1.820s 00:06:51.976 user 0m2.038s 00:06:51.976 sys 0m0.579s 00:06:51.976 06:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.976 
06:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.976 ************************************ 00:06:51.976 END TEST locking_app_on_locked_coremask 00:06:51.976 ************************************ 00:06:51.976 06:52:12 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:51.976 06:52:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.976 06:52:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.976 06:52:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.976 ************************************ 00:06:51.976 START TEST locking_overlapped_coremask 00:06:51.976 ************************************ 00:06:51.976 06:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:51.976 06:52:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=109787 00:06:51.976 06:52:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:51.976 06:52:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 109787 /var/tmp/spdk.sock 00:06:51.976 06:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 109787 ']' 00:06:51.976 06:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.976 06:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.976 06:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.976 06:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.976 06:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.976 [2024-11-18 06:52:12.772962] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:51.976 [2024-11-18 06:52:12.773050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109787 ] 00:06:51.976 [2024-11-18 06:52:12.842824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.976 [2024-11-18 06:52:12.893963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.976 [2024-11-18 06:52:12.894035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.976 [2024-11-18 06:52:12.894031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.235 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.235 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:52.235 06:52:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=109920 00:06:52.235 06:52:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 109920 /var/tmp/spdk2.sock 00:06:52.235 06:52:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:52.235 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:52.235 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 109920 /var/tmp/spdk2.sock 00:06:52.235 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:52.235 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.235 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:52.235 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.235 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 109920 /var/tmp/spdk2.sock 00:06:52.235 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 109920 ']' 00:06:52.235 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.235 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.235 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.235 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.235 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.493 [2024-11-18 06:52:13.217011] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
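The conflict reported next follows directly from the two core masks: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so they intersect exactly at core 2, which the first target already holds. The overlap can be confirmed with plain shell arithmetic:

  printf 'overlapping mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2 -> core 2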
00:06:52.493 [2024-11-18 06:52:13.217107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109920 ] 00:06:52.493 [2024-11-18 06:52:13.321658] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 109787 has claimed it. 00:06:52.493 [2024-11-18 06:52:13.321720] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:53.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (109920) - No such process 00:06:53.059 ERROR: process (pid: 109920) is no longer running 00:06:53.059 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.059 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:53.059 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:53.059 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:53.059 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:53.059 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:53.059 06:52:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:53.059 06:52:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:53.059 06:52:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:53.060 06:52:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:53.060 06:52:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 109787 00:06:53.060 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 109787 ']' 00:06:53.060 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 109787 00:06:53.060 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:53.060 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.060 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109787 00:06:53.060 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.060 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.060 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109787' 00:06:53.060 killing process with pid 109787 00:06:53.060 06:52:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 109787 00:06:53.060 06:52:13 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 109787 00:06:53.626 00:06:53.626 real 0m1.627s 00:06:53.626 user 0m4.563s 00:06:53.626 sys 0m0.470s 00:06:53.626 06:52:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.626 06:52:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.626 ************************************ 00:06:53.626 END TEST locking_overlapped_coremask 00:06:53.626 ************************************ 00:06:53.626 06:52:14 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:53.626 06:52:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.626 06:52:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.626 06:52:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.626 ************************************ 00:06:53.626 START TEST locking_overlapped_coremask_via_rpc 00:06:53.626 ************************************ 00:06:53.626 06:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:53.626 06:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=110082 00:06:53.626 06:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:53.627 06:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 110082 /var/tmp/spdk.sock 00:06:53.627 06:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 110082 ']' 00:06:53.627 06:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.627 06:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.627 06:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.627 06:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.627 06:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.627 [2024-11-18 06:52:14.450133] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:53.627 [2024-11-18 06:52:14.450240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110082 ] 00:06:53.627 [2024-11-18 06:52:14.514224] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
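check_remaining_locks, run in the teardown of the overlapped-coremask test above and again at the end of the via_rpc variant below, simply compares the lock files left under /var/tmp with the set expected for the core mask; for 0x7 that is spdk_cpu_lock_000 through spdk_cpu_lock_002. A stand-alone equivalent of the comparison seen in the trace:

  expected=(/var/tmp/spdk_cpu_lock_{000..002})   # one file per core in mask 0x7
  actual=(/var/tmp/spdk_cpu_lock_*)
  [[ "${actual[*]}" == "${expected[*]}" ]] && echo "lock files match the claimed cores"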
00:06:53.627 [2024-11-18 06:52:14.514262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.627 [2024-11-18 06:52:14.559031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.627 [2024-11-18 06:52:14.562513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.627 [2024-11-18 06:52:14.562529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.885 06:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.885 06:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.885 06:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=110092 00:06:53.885 06:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:53.885 06:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 110092 /var/tmp/spdk2.sock 00:06:53.885 06:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 110092 ']' 00:06:53.885 06:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.885 06:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.885 06:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.885 06:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.885 06:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.144 [2024-11-18 06:52:14.889211] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:54.144 [2024-11-18 06:52:14.889304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110092 ] 00:06:54.144 [2024-11-18 06:52:14.994297] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:54.144 [2024-11-18 06:52:14.994338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.144 [2024-11-18 06:52:15.092585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.144 [2024-11-18 06:52:15.095542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:54.144 [2024-11-18 06:52:15.095544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.079 [2024-11-18 06:52:15.887589] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 110082 has claimed it. 
00:06:55.079 request: 00:06:55.079 { 00:06:55.079 "method": "framework_enable_cpumask_locks", 00:06:55.079 "req_id": 1 00:06:55.079 } 00:06:55.079 Got JSON-RPC error response 00:06:55.079 response: 00:06:55.079 { 00:06:55.079 "code": -32603, 00:06:55.079 "message": "Failed to claim CPU core: 2" 00:06:55.079 } 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 110082 /var/tmp/spdk.sock 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 110082 ']' 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.079 06:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.337 06:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.337 06:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:55.337 06:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 110092 /var/tmp/spdk2.sock 00:06:55.337 06:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 110092 ']' 00:06:55.337 06:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.337 06:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.337 06:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:55.337 06:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.337 06:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.596 06:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.596 06:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:55.596 06:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:55.596 06:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:55.596 06:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:55.596 06:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:55.596 00:06:55.596 real 0m2.060s 00:06:55.596 user 0m1.132s 00:06:55.596 sys 0m0.187s 00:06:55.596 06:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.596 06:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.596 ************************************ 00:06:55.596 END TEST locking_overlapped_coremask_via_rpc 00:06:55.596 ************************************ 00:06:55.596 06:52:16 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:55.596 06:52:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 110082 ]] 00:06:55.596 06:52:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 110082 00:06:55.596 06:52:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 110082 ']' 00:06:55.596 06:52:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 110082 00:06:55.596 06:52:16 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:55.596 06:52:16 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.596 06:52:16 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110082 00:06:55.596 06:52:16 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.596 06:52:16 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.596 06:52:16 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110082' 00:06:55.596 killing process with pid 110082 00:06:55.596 06:52:16 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 110082 00:06:55.596 06:52:16 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 110082 00:06:56.163 06:52:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 110092 ]] 00:06:56.163 06:52:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 110092 00:06:56.163 06:52:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 110092 ']' 00:06:56.163 06:52:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 110092 00:06:56.163 06:52:16 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:56.163 06:52:16 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:06:56.163 06:52:16 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110092 00:06:56.163 06:52:16 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:56.163 06:52:16 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:56.163 06:52:16 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110092' 00:06:56.163 killing process with pid 110092 00:06:56.163 06:52:16 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 110092 00:06:56.163 06:52:16 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 110092 00:06:56.421 06:52:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:56.421 06:52:17 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:56.421 06:52:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 110082 ]] 00:06:56.422 06:52:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 110082 00:06:56.422 06:52:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 110082 ']' 00:06:56.422 06:52:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 110082 00:06:56.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (110082) - No such process 00:06:56.422 06:52:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 110082 is not found' 00:06:56.422 Process with pid 110082 is not found 00:06:56.422 06:52:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 110092 ]] 00:06:56.422 06:52:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 110092 00:06:56.422 06:52:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 110092 ']' 00:06:56.422 06:52:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 110092 00:06:56.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (110092) - No such process 00:06:56.422 06:52:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 110092 is not found' 00:06:56.422 Process with pid 110092 is not found 00:06:56.422 06:52:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:56.422 00:06:56.422 real 0m14.831s 00:06:56.422 user 0m27.515s 00:06:56.422 sys 0m5.124s 00:06:56.422 06:52:17 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.422 06:52:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.422 ************************************ 00:06:56.422 END TEST cpu_locks 00:06:56.422 ************************************ 00:06:56.422 00:06:56.422 real 0m39.234s 00:06:56.422 user 1m17.866s 00:06:56.422 sys 0m9.234s 00:06:56.422 06:52:17 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.422 06:52:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:56.422 ************************************ 00:06:56.422 END TEST event 00:06:56.422 ************************************ 00:06:56.422 06:52:17 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:56.422 06:52:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.422 06:52:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.422 06:52:17 -- common/autotest_common.sh@10 -- # set +x 00:06:56.680 ************************************ 00:06:56.680 START TEST thread 00:06:56.680 ************************************ 00:06:56.681 06:52:17 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:56.681 * Looking for test storage... 00:06:56.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:56.681 06:52:17 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:56.681 06:52:17 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:56.681 06:52:17 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:56.681 06:52:17 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:56.681 06:52:17 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.681 06:52:17 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.681 06:52:17 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.681 06:52:17 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.681 06:52:17 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.681 06:52:17 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.681 06:52:17 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.681 06:52:17 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.681 06:52:17 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.681 06:52:17 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.681 06:52:17 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.681 06:52:17 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:56.681 06:52:17 thread -- scripts/common.sh@345 -- # : 1 00:06:56.681 06:52:17 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.681 06:52:17 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:56.681 06:52:17 thread -- scripts/common.sh@365 -- # decimal 1 00:06:56.681 06:52:17 thread -- scripts/common.sh@353 -- # local d=1 00:06:56.681 06:52:17 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.681 06:52:17 thread -- scripts/common.sh@355 -- # echo 1 00:06:56.681 06:52:17 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.681 06:52:17 thread -- scripts/common.sh@366 -- # decimal 2 00:06:56.681 06:52:17 thread -- scripts/common.sh@353 -- # local d=2 00:06:56.681 06:52:17 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.681 06:52:17 thread -- scripts/common.sh@355 -- # echo 2 00:06:56.681 06:52:17 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.681 06:52:17 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.681 06:52:17 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.681 06:52:17 thread -- scripts/common.sh@368 -- # return 0 00:06:56.681 06:52:17 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.681 06:52:17 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:56.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.681 --rc genhtml_branch_coverage=1 00:06:56.681 --rc genhtml_function_coverage=1 00:06:56.681 --rc genhtml_legend=1 00:06:56.681 --rc geninfo_all_blocks=1 00:06:56.681 --rc geninfo_unexecuted_blocks=1 00:06:56.681 00:06:56.681 ' 00:06:56.681 06:52:17 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:56.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.681 --rc genhtml_branch_coverage=1 00:06:56.681 --rc genhtml_function_coverage=1 00:06:56.681 --rc genhtml_legend=1 00:06:56.681 --rc geninfo_all_blocks=1 00:06:56.681 --rc geninfo_unexecuted_blocks=1 00:06:56.681 00:06:56.681 ' 00:06:56.681 06:52:17 thread 
-- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:56.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.681 --rc genhtml_branch_coverage=1 00:06:56.681 --rc genhtml_function_coverage=1 00:06:56.681 --rc genhtml_legend=1 00:06:56.681 --rc geninfo_all_blocks=1 00:06:56.681 --rc geninfo_unexecuted_blocks=1 00:06:56.681 00:06:56.681 ' 00:06:56.681 06:52:17 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:56.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.681 --rc genhtml_branch_coverage=1 00:06:56.681 --rc genhtml_function_coverage=1 00:06:56.681 --rc genhtml_legend=1 00:06:56.681 --rc geninfo_all_blocks=1 00:06:56.681 --rc geninfo_unexecuted_blocks=1 00:06:56.681 00:06:56.681 ' 00:06:56.681 06:52:17 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:56.681 06:52:17 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:56.681 06:52:17 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.681 06:52:17 thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.681 ************************************ 00:06:56.681 START TEST thread_poller_perf 00:06:56.681 ************************************ 00:06:56.681 06:52:17 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:56.681 [2024-11-18 06:52:17.608895] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:56.681 [2024-11-18 06:52:17.608961] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110589 ] 00:06:56.940 [2024-11-18 06:52:17.675353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.940 [2024-11-18 06:52:17.719145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.940 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:57.876 [2024-11-18T05:52:18.854Z] ====================================== 00:06:57.876 [2024-11-18T05:52:18.854Z] busy:2713097625 (cyc) 00:06:57.876 [2024-11-18T05:52:18.854Z] total_run_count: 368000 00:06:57.876 [2024-11-18T05:52:18.854Z] tsc_hz: 2700000000 (cyc) 00:06:57.876 [2024-11-18T05:52:18.854Z] ====================================== 00:06:57.876 [2024-11-18T05:52:18.854Z] poller_cost: 7372 (cyc), 2730 (nsec) 00:06:57.876 00:06:57.876 real 0m1.175s 00:06:57.876 user 0m1.108s 00:06:57.876 sys 0m0.062s 00:06:57.876 06:52:18 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.876 06:52:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:57.876 ************************************ 00:06:57.876 END TEST thread_poller_perf 00:06:57.876 ************************************ 00:06:57.876 06:52:18 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:57.876 06:52:18 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:57.876 06:52:18 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.876 06:52:18 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.876 ************************************ 00:06:57.876 START TEST thread_poller_perf 00:06:57.876 ************************************ 00:06:57.876 06:52:18 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:57.876 [2024-11-18 06:52:18.833707] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:57.876 [2024-11-18 06:52:18.833769] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110742 ] 00:06:58.136 [2024-11-18 06:52:18.898638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.136 [2024-11-18 06:52:18.941698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.136 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:59.069 [2024-11-18T05:52:20.047Z] ====================================== 00:06:59.069 [2024-11-18T05:52:20.047Z] busy:2702382636 (cyc) 00:06:59.069 [2024-11-18T05:52:20.047Z] total_run_count: 4819000 00:06:59.069 [2024-11-18T05:52:20.047Z] tsc_hz: 2700000000 (cyc) 00:06:59.069 [2024-11-18T05:52:20.047Z] ====================================== 00:06:59.069 [2024-11-18T05:52:20.047Z] poller_cost: 560 (cyc), 207 (nsec) 00:06:59.069 00:06:59.069 real 0m1.167s 00:06:59.069 user 0m1.095s 00:06:59.069 sys 0m0.067s 00:06:59.069 06:52:19 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.069 06:52:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:59.069 ************************************ 00:06:59.069 END TEST thread_poller_perf 00:06:59.069 ************************************ 00:06:59.069 06:52:20 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:59.069 00:06:59.069 real 0m2.594s 00:06:59.069 user 0m2.348s 00:06:59.069 sys 0m0.251s 00:06:59.069 06:52:20 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.069 06:52:20 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.069 ************************************ 00:06:59.069 END TEST thread 00:06:59.069 ************************************ 00:06:59.069 06:52:20 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:59.069 06:52:20 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:59.069 06:52:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.069 06:52:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.069 06:52:20 -- common/autotest_common.sh@10 -- # set +x 00:06:59.328 ************************************ 00:06:59.328 START TEST app_cmdline 00:06:59.328 ************************************ 00:06:59.328 06:52:20 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:59.328 * Looking for test storage... 
00:06:59.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:59.328 06:52:20 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:59.328 06:52:20 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:59.328 06:52:20 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:59.328 06:52:20 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.328 06:52:20 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:59.328 06:52:20 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.328 06:52:20 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:59.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.328 --rc genhtml_branch_coverage=1 00:06:59.328 --rc genhtml_function_coverage=1 00:06:59.328 --rc genhtml_legend=1 00:06:59.328 --rc geninfo_all_blocks=1 00:06:59.328 --rc geninfo_unexecuted_blocks=1 00:06:59.328 00:06:59.328 ' 00:06:59.328 06:52:20 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:59.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.328 --rc genhtml_branch_coverage=1 00:06:59.328 --rc genhtml_function_coverage=1 00:06:59.328 --rc genhtml_legend=1 00:06:59.328 --rc geninfo_all_blocks=1 00:06:59.328 --rc geninfo_unexecuted_blocks=1 
00:06:59.328 00:06:59.328 ' 00:06:59.328 06:52:20 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:59.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.328 --rc genhtml_branch_coverage=1 00:06:59.328 --rc genhtml_function_coverage=1 00:06:59.328 --rc genhtml_legend=1 00:06:59.328 --rc geninfo_all_blocks=1 00:06:59.329 --rc geninfo_unexecuted_blocks=1 00:06:59.329 00:06:59.329 ' 00:06:59.329 06:52:20 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:59.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.329 --rc genhtml_branch_coverage=1 00:06:59.329 --rc genhtml_function_coverage=1 00:06:59.329 --rc genhtml_legend=1 00:06:59.329 --rc geninfo_all_blocks=1 00:06:59.329 --rc geninfo_unexecuted_blocks=1 00:06:59.329 00:06:59.329 ' 00:06:59.329 06:52:20 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:59.329 06:52:20 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=110953 00:06:59.329 06:52:20 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:59.329 06:52:20 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 110953 00:06:59.329 06:52:20 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 110953 ']' 00:06:59.329 06:52:20 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.329 06:52:20 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.329 06:52:20 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.329 06:52:20 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.329 06:52:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.329 [2024-11-18 06:52:20.255143] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:59.329 [2024-11-18 06:52:20.255225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110953 ] 00:06:59.588 [2024-11-18 06:52:20.323687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.588 [2024-11-18 06:52:20.370381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.846 06:52:20 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.846 06:52:20 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:59.846 06:52:20 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:00.105 { 00:07:00.105 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:07:00.105 "fields": { 00:07:00.105 "major": 25, 00:07:00.105 "minor": 1, 00:07:00.105 "patch": 0, 00:07:00.105 "suffix": "-pre", 00:07:00.105 "commit": "83e8405e4" 00:07:00.105 } 00:07:00.105 } 00:07:00.105 06:52:20 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:00.105 06:52:20 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:00.105 06:52:20 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:00.105 06:52:20 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:00.105 06:52:20 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:00.105 06:52:20 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:00.105 06:52:20 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.105 06:52:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:00.105 06:52:20 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:00.105 06:52:20 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.105 06:52:20 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:00.105 06:52:20 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:00.105 06:52:20 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.105 06:52:20 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:00.105 06:52:20 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.105 06:52:20 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.105 06:52:20 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.105 06:52:20 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.105 06:52:20 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.105 06:52:20 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.105 06:52:20 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.105 06:52:20 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.105 06:52:20 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:00.105 06:52:20 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.364 request: 00:07:00.364 { 00:07:00.364 "method": "env_dpdk_get_mem_stats", 00:07:00.364 "req_id": 1 00:07:00.364 } 00:07:00.364 Got JSON-RPC error response 00:07:00.364 response: 00:07:00.364 { 00:07:00.364 "code": -32601, 00:07:00.364 "message": "Method not found" 00:07:00.364 } 00:07:00.364 06:52:21 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:00.364 06:52:21 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:00.364 06:52:21 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:00.364 06:52:21 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:00.364 06:52:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 110953 00:07:00.364 06:52:21 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 110953 ']' 00:07:00.364 06:52:21 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 110953 00:07:00.364 06:52:21 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:00.364 06:52:21 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.364 06:52:21 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110953 00:07:00.364 06:52:21 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.364 06:52:21 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.364 06:52:21 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110953' 00:07:00.364 killing process with pid 110953 00:07:00.364 06:52:21 app_cmdline -- common/autotest_common.sh@973 -- # kill 110953 00:07:00.364 06:52:21 app_cmdline -- common/autotest_common.sh@978 -- # wait 110953 00:07:00.932 00:07:00.932 real 0m1.594s 00:07:00.932 user 0m1.981s 00:07:00.932 sys 0m0.492s 00:07:00.932 06:52:21 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.932 06:52:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:00.932 ************************************ 00:07:00.932 END TEST app_cmdline 00:07:00.932 ************************************ 00:07:00.932 06:52:21 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:00.932 06:52:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.932 06:52:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.932 06:52:21 -- common/autotest_common.sh@10 -- # set +x 00:07:00.932 ************************************ 00:07:00.932 START TEST version 00:07:00.932 ************************************ 00:07:00.932 06:52:21 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:00.932 * Looking for test storage... 
00:07:00.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:00.932 06:52:21 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.932 06:52:21 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.932 06:52:21 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.932 06:52:21 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.932 06:52:21 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.932 06:52:21 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.932 06:52:21 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.932 06:52:21 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.932 06:52:21 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.932 06:52:21 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.932 06:52:21 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.932 06:52:21 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.932 06:52:21 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.932 06:52:21 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.932 06:52:21 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.932 06:52:21 version -- scripts/common.sh@344 -- # case "$op" in 00:07:00.932 06:52:21 version -- scripts/common.sh@345 -- # : 1 00:07:00.932 06:52:21 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.932 06:52:21 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.932 06:52:21 version -- scripts/common.sh@365 -- # decimal 1 00:07:00.932 06:52:21 version -- scripts/common.sh@353 -- # local d=1 00:07:00.932 06:52:21 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.932 06:52:21 version -- scripts/common.sh@355 -- # echo 1 00:07:00.932 06:52:21 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.932 06:52:21 version -- scripts/common.sh@366 -- # decimal 2 00:07:00.933 06:52:21 version -- scripts/common.sh@353 -- # local d=2 00:07:00.933 06:52:21 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.933 06:52:21 version -- scripts/common.sh@355 -- # echo 2 00:07:00.933 06:52:21 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.933 06:52:21 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.933 06:52:21 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.933 06:52:21 version -- scripts/common.sh@368 -- # return 0 00:07:00.933 06:52:21 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.933 06:52:21 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.933 --rc genhtml_branch_coverage=1 00:07:00.933 --rc genhtml_function_coverage=1 00:07:00.933 --rc genhtml_legend=1 00:07:00.933 --rc geninfo_all_blocks=1 00:07:00.933 --rc geninfo_unexecuted_blocks=1 00:07:00.933 00:07:00.933 ' 00:07:00.933 06:52:21 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:00.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.933 --rc genhtml_branch_coverage=1 00:07:00.933 --rc genhtml_function_coverage=1 00:07:00.933 --rc genhtml_legend=1 00:07:00.933 --rc geninfo_all_blocks=1 00:07:00.933 --rc geninfo_unexecuted_blocks=1 00:07:00.933 00:07:00.933 ' 00:07:00.933 06:52:21 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:00.933 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.933 --rc genhtml_branch_coverage=1 00:07:00.933 --rc genhtml_function_coverage=1 00:07:00.933 --rc genhtml_legend=1 00:07:00.933 --rc geninfo_all_blocks=1 00:07:00.933 --rc geninfo_unexecuted_blocks=1 00:07:00.933 00:07:00.933 ' 00:07:00.933 06:52:21 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.933 --rc genhtml_branch_coverage=1 00:07:00.933 --rc genhtml_function_coverage=1 00:07:00.933 --rc genhtml_legend=1 00:07:00.933 --rc geninfo_all_blocks=1 00:07:00.933 --rc geninfo_unexecuted_blocks=1 00:07:00.933 00:07:00.933 ' 00:07:00.933 06:52:21 version -- app/version.sh@17 -- # get_header_version major 00:07:00.933 06:52:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:00.933 06:52:21 version -- app/version.sh@14 -- # cut -f2 00:07:00.933 06:52:21 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.933 06:52:21 version -- app/version.sh@17 -- # major=25 00:07:00.933 06:52:21 version -- app/version.sh@18 -- # get_header_version minor 00:07:00.933 06:52:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:00.933 06:52:21 version -- app/version.sh@14 -- # cut -f2 00:07:00.933 06:52:21 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.933 06:52:21 version -- app/version.sh@18 -- # minor=1 00:07:00.933 06:52:21 version -- app/version.sh@19 -- # get_header_version patch 00:07:00.933 06:52:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:00.933 06:52:21 version -- app/version.sh@14 -- # cut -f2 00:07:00.933 06:52:21 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.933 06:52:21 version -- app/version.sh@19 -- # patch=0 00:07:00.933 06:52:21 version -- app/version.sh@20 -- # get_header_version suffix 00:07:00.933 06:52:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:00.933 06:52:21 version -- app/version.sh@14 -- # cut -f2 00:07:00.933 06:52:21 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.933 06:52:21 version -- app/version.sh@20 -- # suffix=-pre 00:07:00.933 06:52:21 version -- app/version.sh@22 -- # version=25.1 00:07:00.933 06:52:21 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:00.933 06:52:21 version -- app/version.sh@28 -- # version=25.1rc0 00:07:00.933 06:52:21 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:00.933 06:52:21 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:01.192 06:52:21 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:01.192 06:52:21 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:01.192 00:07:01.192 real 0m0.205s 00:07:01.192 user 0m0.137s 00:07:01.192 sys 0m0.094s 00:07:01.192 06:52:21 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.192 
06:52:21 version -- common/autotest_common.sh@10 -- # set +x 00:07:01.192 ************************************ 00:07:01.192 END TEST version 00:07:01.192 ************************************ 00:07:01.192 06:52:21 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:01.192 06:52:21 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:01.192 06:52:21 -- spdk/autotest.sh@194 -- # uname -s 00:07:01.192 06:52:21 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:01.192 06:52:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:01.192 06:52:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:01.192 06:52:21 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:01.192 06:52:21 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:01.192 06:52:21 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:01.192 06:52:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:01.192 06:52:21 -- common/autotest_common.sh@10 -- # set +x 00:07:01.192 06:52:21 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:01.192 06:52:21 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:01.192 06:52:21 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:01.192 06:52:21 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:01.192 06:52:21 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:01.192 06:52:21 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:01.192 06:52:21 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:01.192 06:52:21 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.192 06:52:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.192 06:52:21 -- common/autotest_common.sh@10 -- # set +x 00:07:01.192 ************************************ 00:07:01.192 START TEST nvmf_tcp 00:07:01.192 ************************************ 00:07:01.192 06:52:21 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:01.192 * Looking for test storage... 
00:07:01.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:01.192 06:52:22 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.192 06:52:22 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.192 06:52:22 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.193 06:52:22 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.193 06:52:22 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:01.193 06:52:22 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.193 06:52:22 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:01.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.193 --rc genhtml_branch_coverage=1 00:07:01.193 --rc genhtml_function_coverage=1 00:07:01.193 --rc genhtml_legend=1 00:07:01.193 --rc geninfo_all_blocks=1 00:07:01.193 --rc geninfo_unexecuted_blocks=1 00:07:01.193 00:07:01.193 ' 00:07:01.193 06:52:22 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.193 --rc genhtml_branch_coverage=1 00:07:01.193 --rc genhtml_function_coverage=1 00:07:01.193 --rc genhtml_legend=1 00:07:01.193 --rc geninfo_all_blocks=1 00:07:01.193 --rc geninfo_unexecuted_blocks=1 00:07:01.193 00:07:01.193 ' 00:07:01.193 06:52:22 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:01.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.193 --rc genhtml_branch_coverage=1 00:07:01.193 --rc genhtml_function_coverage=1 00:07:01.193 --rc genhtml_legend=1 00:07:01.193 --rc geninfo_all_blocks=1 00:07:01.193 --rc geninfo_unexecuted_blocks=1 00:07:01.193 00:07:01.193 ' 00:07:01.193 06:52:22 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.193 --rc genhtml_branch_coverage=1 00:07:01.193 --rc genhtml_function_coverage=1 00:07:01.193 --rc genhtml_legend=1 00:07:01.193 --rc geninfo_all_blocks=1 00:07:01.193 --rc geninfo_unexecuted_blocks=1 00:07:01.193 00:07:01.193 ' 00:07:01.193 06:52:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:01.193 06:52:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:01.193 06:52:22 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:01.193 06:52:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.193 06:52:22 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.193 06:52:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:01.193 ************************************ 00:07:01.193 START TEST nvmf_target_core 00:07:01.193 ************************************ 00:07:01.193 06:52:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:01.457 * Looking for test storage... 00:07:01.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:01.457 06:52:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.457 06:52:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.457 06:52:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.457 06:52:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.457 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.457 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.457 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.457 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.457 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.457 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.457 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.457 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.457 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.457 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.457 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.457 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:01.457 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:01.457 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.457 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.457 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:01.457 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:01.457 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.457 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:01.458 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.458 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:01.458 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:01.458 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.458 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:01.458 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.458 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.458 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.458 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:01.458 06:52:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.458 06:52:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:01.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.458 --rc genhtml_branch_coverage=1 00:07:01.458 --rc genhtml_function_coverage=1 00:07:01.458 --rc genhtml_legend=1 00:07:01.458 --rc geninfo_all_blocks=1 00:07:01.458 --rc geninfo_unexecuted_blocks=1 00:07:01.458 00:07:01.458 ' 00:07:01.458 06:52:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.458 --rc genhtml_branch_coverage=1 00:07:01.458 --rc genhtml_function_coverage=1 00:07:01.458 --rc genhtml_legend=1 00:07:01.458 --rc geninfo_all_blocks=1 00:07:01.458 --rc geninfo_unexecuted_blocks=1 00:07:01.458 00:07:01.458 ' 00:07:01.458 06:52:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:01.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.458 --rc genhtml_branch_coverage=1 00:07:01.458 --rc genhtml_function_coverage=1 00:07:01.458 --rc genhtml_legend=1 00:07:01.458 --rc geninfo_all_blocks=1 00:07:01.458 --rc geninfo_unexecuted_blocks=1 00:07:01.458 00:07:01.458 ' 00:07:01.458 06:52:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.458 --rc genhtml_branch_coverage=1 00:07:01.458 --rc genhtml_function_coverage=1 00:07:01.458 --rc genhtml_legend=1 00:07:01.458 --rc geninfo_all_blocks=1 00:07:01.458 --rc geninfo_unexecuted_blocks=1 00:07:01.458 00:07:01.459 ' 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:01.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:01.459 
************************************ 00:07:01.459 START TEST nvmf_abort 00:07:01.459 ************************************ 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:01.459 * Looking for test storage... 00:07:01.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.459 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:01.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.719 --rc genhtml_branch_coverage=1 00:07:01.719 --rc genhtml_function_coverage=1 00:07:01.719 --rc genhtml_legend=1 00:07:01.719 --rc geninfo_all_blocks=1 00:07:01.719 --rc geninfo_unexecuted_blocks=1 00:07:01.719 00:07:01.719 ' 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.719 --rc genhtml_branch_coverage=1 00:07:01.719 --rc genhtml_function_coverage=1 00:07:01.719 --rc genhtml_legend=1 00:07:01.719 --rc geninfo_all_blocks=1 00:07:01.719 --rc geninfo_unexecuted_blocks=1 00:07:01.719 00:07:01.719 ' 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:01.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.719 --rc genhtml_branch_coverage=1 00:07:01.719 --rc genhtml_function_coverage=1 00:07:01.719 --rc genhtml_legend=1 00:07:01.719 --rc geninfo_all_blocks=1 00:07:01.719 --rc geninfo_unexecuted_blocks=1 00:07:01.719 00:07:01.719 ' 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.719 --rc genhtml_branch_coverage=1 00:07:01.719 --rc genhtml_function_coverage=1 00:07:01.719 --rc genhtml_legend=1 00:07:01.719 --rc geninfo_all_blocks=1 00:07:01.719 --rc geninfo_unexecuted_blocks=1 00:07:01.719 00:07:01.719 ' 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.719 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:01.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
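nvmftestinit, traced from here on, first discovers the supported NICs through sysfs (the "Found net devices under 0000:0a:00.x" lines) and then builds a point-to-point NVMe/TCP topology: the target interface is moved into its own network namespace while the initiator interface stays in the root namespace. A condensed sketch of what the helper works out to on this rig, with the interface names, addresses and namespace name taken from the records below; it is a reconstruction of the observed effect, not the helper itself:

  # PCI-to-netdev mapping: the kernel interface name for each NIC lives in sysfs.
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      ls "/sys/bus/pci/devices/$pci/net/"        # -> cvl_0_0 and cvl_0_1
  done

  # Target side: cvl_0_0 goes into namespace cvl_0_0_ns_spdk with 10.0.0.2/24.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Initiator side: cvl_0_1 keeps 10.0.0.1/24 in the root namespace and the
  # NVMe/TCP port is opened in the firewall.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Connectivity is verified in both directions with a single ping, as shown in the trace.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1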
00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:01.720 06:52:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:04.258 06:52:24 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:04.258 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:04.258 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:04.258 06:52:24 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:04.258 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:04.258 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:04.258 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:04.259 06:52:24 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:04.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:04.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:07:04.259 00:07:04.259 --- 10.0.0.2 ping statistics --- 00:07:04.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.259 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:04.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:04.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:07:04.259 00:07:04.259 --- 10.0.0.1 ping statistics --- 00:07:04.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.259 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=113041 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 113041 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 113041 ']' 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.259 06:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:04.259 [2024-11-18 06:52:24.898666] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
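nvmfappstart launches the target inside that namespace; the full command is captured just above (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE, pid 113041). A brief reading of those flags, matched against the startup notices that follow; the core-mask arithmetic is the only part worked out here:

  # -m 0xE   : core bitmask, 0xE = 0b1110, so cores 1, 2 and 3 host the three reactors
  #            reported below; core 0 is left free for the initiator-side abort example (-c 0x1).
  # -i 0     : shared-memory id, the -i "$NVMF_APP_SHM_ID" argument assembled earlier in common.sh.
  # -e 0xFFFF: tracepoint group mask, echoed back by the "Tracepoint Group Mask 0xFFFF" notice.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE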
00:07:04.259 [2024-11-18 06:52:24.898762] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.259 [2024-11-18 06:52:24.968644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.259 [2024-11-18 06:52:25.014386] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:04.259 [2024-11-18 06:52:25.014442] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:04.259 [2024-11-18 06:52:25.014469] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:04.259 [2024-11-18 06:52:25.014481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:04.259 [2024-11-18 06:52:25.014496] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:04.259 [2024-11-18 06:52:25.015901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.259 [2024-11-18 06:52:25.015965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.259 [2024-11-18 06:52:25.015968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:04.259 [2024-11-18 06:52:25.156650] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:04.259 Malloc0 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:04.259 Delay0 
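With the target listening on /var/tmp/spdk.sock, the abort test provisions its storage over RPC: a TCP transport, a 64 MiB Malloc bdev with 4096-byte blocks (the MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE values set earlier), and a Delay bdev stacked on top so that I/O stays in flight long enough to be aborted. The subsystem, namespace and listener calls appear in the records just below. rpc_cmd issues the same RPC methods that scripts/rpc.py exposes, so standalone equivalents would look roughly like this, with every argument copied from the trace; only the comments are interpretation:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # default socket /var/tmp/spdk.sock

  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256                    # TCP transport, options as traced
  $rpc bdev_malloc_create 64 4096 -b Malloc0                             # 64 MiB RAM-backed bdev, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000                       # ~1 s artificial latency (values in microseconds)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0      # -a allows any host NQN to connect
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0           # expose Delay0 as namespace 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The abort example then connects from the initiator side with queue depth 128 on core 0 (-c 0x1 -q 128, as traced below) and issues aborts against the outstanding commands; the delay bdev is what keeps most of them in flight, which is consistent with the counters further down, where nearly every submitted I/O is reported as failed (aborted) while the aborts themselves mostly succeed.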
00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:04.259 [2024-11-18 06:52:25.228156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.259 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:04.519 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.519 06:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:04.519 [2024-11-18 06:52:25.343300] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:07.050 Initializing NVMe Controllers 00:07:07.050 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:07.050 controller IO queue size 128 less than required 00:07:07.050 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:07.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:07.050 Initialization complete. Launching workers. 
00:07:07.050 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29445 00:07:07.050 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29506, failed to submit 62 00:07:07.050 success 29449, unsuccessful 57, failed 0 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:07.050 rmmod nvme_tcp 00:07:07.050 rmmod nvme_fabrics 00:07:07.050 rmmod nvme_keyring 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 113041 ']' 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 113041 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 113041 ']' 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 113041 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113041 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113041' 00:07:07.050 killing process with pid 113041 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 113041 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 113041 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:07.050 06:52:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.961 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:08.961 00:07:08.961 real 0m7.522s 00:07:08.961 user 0m10.885s 00:07:08.961 sys 0m2.545s 00:07:08.961 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.962 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:08.962 ************************************ 00:07:08.962 END TEST nvmf_abort 00:07:08.962 ************************************ 00:07:08.962 06:52:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:08.962 06:52:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:08.962 06:52:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.962 06:52:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:08.962 ************************************ 00:07:08.962 START TEST nvmf_ns_hotplug_stress 00:07:08.962 ************************************ 00:07:08.962 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:09.222 * Looking for test storage... 
00:07:09.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:09.222 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:09.222 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:09.222 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:09.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.222 --rc genhtml_branch_coverage=1 00:07:09.222 --rc genhtml_function_coverage=1 00:07:09.222 --rc genhtml_legend=1 00:07:09.222 --rc geninfo_all_blocks=1 00:07:09.222 --rc geninfo_unexecuted_blocks=1 00:07:09.222 00:07:09.222 ' 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:09.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.222 --rc genhtml_branch_coverage=1 00:07:09.222 --rc genhtml_function_coverage=1 00:07:09.222 --rc genhtml_legend=1 00:07:09.222 --rc geninfo_all_blocks=1 00:07:09.222 --rc geninfo_unexecuted_blocks=1 00:07:09.222 00:07:09.222 ' 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:09.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.222 --rc genhtml_branch_coverage=1 00:07:09.222 --rc genhtml_function_coverage=1 00:07:09.222 --rc genhtml_legend=1 00:07:09.222 --rc geninfo_all_blocks=1 00:07:09.222 --rc geninfo_unexecuted_blocks=1 00:07:09.222 00:07:09.222 ' 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:09.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.222 --rc genhtml_branch_coverage=1 00:07:09.222 --rc genhtml_function_coverage=1 00:07:09.222 --rc genhtml_legend=1 00:07:09.222 --rc geninfo_all_blocks=1 00:07:09.222 --rc geninfo_unexecuted_blocks=1 00:07:09.222 00:07:09.222 ' 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:09.222 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:09.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:09.223 06:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:11.768 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:11.769 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.769 
06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:11.769 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:11.769 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
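The device discovery traced above matches a small table of supported NIC device IDs (E810 0x1592/0x159b, X722 0x37d2, and several Mellanox IDs), keeps the matching PCI functions, and then looks under each function's sysfs node for its kernel net devices, which is how the two ports end up as cvl_0_0 and cvl_0_1. A condensed sketch of the sysfs lookup step, assuming the two E810 functions reported above are the ones of interest:

for pci in 0000:0a:00.0 0000:0a:00.1; do
  # every net device registered under this PCI function shows up here
  for dev in "/sys/bus/pci/devices/$pci/net/"*; do
    [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
  done
done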
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:11.769 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:11.769 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:11.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:11.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:07:11.770 00:07:11.770 --- 10.0.0.2 ping statistics --- 00:07:11.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.770 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:11.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:11.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:07:11.770 00:07:11.770 --- 10.0.0.1 ping statistics --- 00:07:11.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.770 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=115399 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 115399 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
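Collected from the nvmf_tcp_init trace above, the network setup amounts to: move the target port into a private namespace, address both ends, open TCP port 4420, and confirm reachability in both directions before the target is launched.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> host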
115399 ']' 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:11.770 [2024-11-18 06:52:32.378469] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:11.770 [2024-11-18 06:52:32.378595] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.770 [2024-11-18 06:52:32.454985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.770 [2024-11-18 06:52:32.504631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:11.770 [2024-11-18 06:52:32.504682] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:11.770 [2024-11-18 06:52:32.504713] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:11.770 [2024-11-18 06:52:32.504725] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:11.770 [2024-11-18 06:52:32.504735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
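The nvmfappstart call above runs the target binary inside the namespace with core mask 0xE, which is why the EAL reports three available cores and the reactor messages that follow show threads on cores 1, 2 and 3. Roughly (paths shortened; waitforlisten is the autotest helper shown in the trace, which blocks until /var/tmp/spdk.sock answers RPCs):

ip netns exec cvl_0_0_ns_spdk \
    ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
waitforlisten "$nvmfpid"       # returns once the RPC socket is up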
00:07:11.770 [2024-11-18 06:52:32.506205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.770 [2024-11-18 06:52:32.507510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.770 [2024-11-18 06:52:32.507516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:11.770 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:12.029 [2024-11-18 06:52:32.890532] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.029 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:12.288 06:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:12.546 [2024-11-18 06:52:33.433147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.546 06:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:12.804 06:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:13.063 Malloc0 00:07:13.063 06:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:13.321 Delay0 00:07:13.322 06:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.887 06:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:13.887 NULL1 00:07:13.887 06:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
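Gathering the rpc.py calls traced above into one place, the subsystem under test is built like this before the stress run starts (the full script path is abbreviated to rpc.py for readability):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_malloc_create 32 512 -b Malloc0
rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
rpc.py bdev_null_create NULL1 1000 512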
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:14.144 06:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=115703 00:07:14.144 06:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:14.144 06:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:14.144 06:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.519 Read completed with error (sct=0, sc=11) 00:07:15.519 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.777 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:15.777 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:16.035 true 00:07:16.035 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:16.035 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.969 06:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.969 06:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:16.970 06:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:17.228 true 00:07:17.228 06:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:17.228 06:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.486 06:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
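With the NULL1 namespace added, the stress loop proper begins: spdk_nvme_perf runs a 30-second randread workload against the target in the background, and while that process is alive the script keeps removing namespace 1, re-adding Delay0, and nudging NULL1's size up by one each pass. A condensed sketch of that loop, matching the @44-@50 trace lines that repeat below:

spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
               -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  null_size=$((null_size + 1))
  rpc.py bdev_null_resize NULL1 "$null_size"
done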
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.744 06:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:17.744 06:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:18.002 true 00:07:18.002 06:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:18.002 06:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.567 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.567 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:18.567 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:18.825 true 00:07:18.825 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:18.825 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.198 06:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.198 06:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:20.198 06:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:20.456 true 00:07:20.456 06:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:20.456 06:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.714 06:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.972 06:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:20.972 06:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:21.230 true 00:07:21.230 06:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:21.231 06:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.488 06:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.746 06:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:21.746 06:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:22.004 true 00:07:22.004 06:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:22.004 06:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.379 06:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.379 06:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:23.379 06:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:23.637 true 00:07:23.637 06:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:23.637 06:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.895 06:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.153 06:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:24.153 06:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:24.411 true 00:07:24.411 06:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:24.411 06:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.978 06:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.978 06:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:24.978 06:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:25.236 true 00:07:25.236 06:52:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:25.236 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.170 06:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.428 06:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:26.428 06:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:26.685 true 00:07:26.685 06:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:26.685 06:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.944 06:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.202 06:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:27.202 06:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:27.459 true 00:07:27.459 06:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:27.459 06:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.394 06:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.652 06:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:28.652 06:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:28.910 true 00:07:28.910 06:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:28.910 06:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.168 06:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.426 06:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:29.426 06:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:29.684 true 00:07:29.684 06:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:29.684 06:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.619 06:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.877 06:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:30.877 06:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:31.135 true 00:07:31.135 06:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:31.135 06:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.393 06:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.651 06:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:31.651 06:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:31.909 true 00:07:31.909 06:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:31.909 06:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.844 06:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.102 
06:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:33.102 06:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:33.361 true 00:07:33.361 06:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:33.361 06:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.619 06:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.878 06:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:33.878 06:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:34.136 true 00:07:34.136 06:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:34.136 06:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.069 06:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.327 06:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:35.327 06:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:35.585 true 00:07:35.585 06:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:35.585 06:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.843 06:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.101 06:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:36.101 06:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:36.358 true 00:07:36.358 06:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:36.359 06:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.616 06:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.874 06:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:36.874 06:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:37.132 true 00:07:37.132 06:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:37.132 06:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.067 06:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.331 06:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:38.331 06:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:38.591 true 00:07:38.591 06:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:38.591 06:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.849 06:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.107 06:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:39.107 06:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:39.364 true 00:07:39.623 06:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:39.623 06:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.880 06:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.138 06:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:40.138 06:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:40.396 true 00:07:40.396 06:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:40.396 06:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.330 06:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.588 06:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:41.588 06:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:41.847 true 00:07:41.847 06:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:41.847 06:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.105 06:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.363 06:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:42.363 06:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:42.621 true 00:07:42.622 06:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:42.622 06:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.880 06:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.137 06:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:43.137 06:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:43.395 true 00:07:43.395 06:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:43.396 06:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.769 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.769 Initializing NVMe Controllers 00:07:44.769 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:44.769 Controller IO queue size 128, less than required. 00:07:44.769 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:44.769 Controller IO queue size 128, less than required. 00:07:44.769 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:44.769 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:44.769 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:44.769 Initialization complete. Launching workers. 00:07:44.769 ======================================================== 00:07:44.769 Latency(us) 00:07:44.769 Device Information : IOPS MiB/s Average min max 00:07:44.769 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 691.59 0.34 83181.22 2884.13 1015169.11 00:07:44.769 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8987.43 4.39 14241.87 4331.19 452636.17 00:07:44.769 ======================================================== 00:07:44.769 Total : 9679.02 4.73 19167.73 2884.13 1015169.11 00:07:44.769 00:07:44.769 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:44.769 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:45.027 true 00:07:45.027 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115703 00:07:45.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (115703) - No such process 00:07:45.027 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 115703 00:07:45.027 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.285 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:45.542 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:45.542 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:45.542 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:45.542 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:45.543 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:45.800 null0 00:07:45.801 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:45.801 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:45.801 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:46.058 null1 00:07:46.058 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:46.058 06:53:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:46.058 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:46.317 null2 00:07:46.317 06:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:46.317 06:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:46.317 06:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:46.575 null3 00:07:46.575 06:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:46.575 06:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:46.575 06:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:46.833 null4 00:07:46.833 06:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:46.833 06:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:46.833 06:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:47.091 null5 00:07:47.091 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:47.091 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:47.091 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:47.349 null6 00:07:47.349 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:47.349 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:47.349 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:47.607 null7 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:47.607 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
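Each `add_remove <nsid> <bdev>` call being launched in the trace here is one hot-plug worker: it attaches its bdev to subsystem nqn.2016-06.io.spdk:cnode1 under a fixed namespace ID and detaches it again, ten times. A sketch of that per-worker loop, reconstructed from the rpc.py invocations visible in the trace ($rpc is the same helper path assumed in the sketch above):

  # Sketch of the per-worker hot-plug loop seen in the trace:
  # worker 1 drives null0 as nsid 1, worker 2 drives null1 as nsid 2, and so on.
  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }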
00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 119768 119769 119771 119773 119775 119777 119779 119781 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.608 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:48.175 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:48.175 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:48.175 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.175 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:48.175 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:48.175 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:48.175 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:48.175 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:48.443 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.443 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.443 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:48.443 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.443 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.443 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:48.443 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.443 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.443 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:48.443 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.443 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.443 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:48.443 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.443 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.443 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:48.443 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.444 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.444 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:48.444 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:48.444 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.444 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:48.444 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.444 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.444 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:48.703 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.703 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:48.703 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:48.703 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:48.703 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:48.703 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:48.703 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:48.703 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.962 06:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:49.220 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:49.220 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:49.220 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:49.220 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:49.220 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:49.220 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:49.220 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:49.220 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.478 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.478 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.478 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:49.478 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.478 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.478 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:49.478 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.478 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.478 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:49.479 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.479 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.479 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:49.479 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.479 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.479 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:49.479 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.479 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.479 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:49.479 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.479 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.479 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:49.479 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.479 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.479 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:49.737 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:49.737 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:49.737 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:49.737 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:49.737 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.737 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:49.737 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:49.995 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:50.253 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
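The interleaved add/remove entries around this point come from eight of these workers running concurrently: the script backgrounds each add_remove call, collects the PIDs, and waits on all of them (the `wait 119768 119769 ...` entry earlier in the trace). A sketch of that dispatch, under the same assumptions as the previous sketches:

  # Sketch of the concurrent dispatch that produces the interleaved xtrace output:
  # one backgrounded add_remove worker per null bdev, then wait for all of them.
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      add_remove "$((i + 1))" "null$i" &
      pids+=($!)
  done
  wait "${pids[@]}"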
00:07:50.253 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.253 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:50.253 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.253 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.253 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:50.254 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.254 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.254 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:50.254 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.254 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.254 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.254 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.254 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:50.254 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:50.254 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.254 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.254 06:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:50.254 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.254 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.254 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:50.254 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.254 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.254 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:50.513 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:50.513 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:50.513 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:50.513 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.513 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:50.513 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:50.513 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:50.513 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:50.771 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.771 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.771 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:50.771 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.771 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.771 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:50.771 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.771 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.771 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:50.772 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:50.772 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.772 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:50.772 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.772 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.772 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:50.772 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.772 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.772 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:50.772 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.772 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.772 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:50.772 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.772 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.772 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:51.030 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:51.030 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:51.030 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:51.030 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:51.030 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.030 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:51.030 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:51.030 06:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.290 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:51.549 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:51.549 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:51.549 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:51.549 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:51.549 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.549 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:51.549 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:51.549 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:51.809 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.809 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.809 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:51.809 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.809 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.809 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:51.809 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.809 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.809 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:51.809 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.809 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.809 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:51.809 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.809 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.809 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:51.809 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.809 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.809 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:52.068 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.068 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.068 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:52.068 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.068 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.068 06:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:52.327 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:52.327 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:52.327 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:52.327 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:52.328 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.328 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:52.328 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:52.328 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:52.586 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.586 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.586 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:52.586 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.586 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.586 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:52.586 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.586 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.586 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:52.586 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.586 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.587 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.587 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.587 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:52.587 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:52.587 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.587 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.587 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:52.587 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.587 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.587 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:52.587 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.587 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.587 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:52.845 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:52.846 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:52.846 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:52.846 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:52.846 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:52.846 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:52.846 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:52.846 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
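Once every worker has finished its ten iterations, the trace further below shows the test clearing its signal traps and calling nvmftestfini, which syncs, unloads the kernel NVMe/TCP initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines) and kills the nvmf_tgt process (PID 115399 in this run). A rough manual equivalent of that teardown is sketched here; it is not the nvmftestfini implementation itself, and NVMF_TGT_PID is an assumed variable holding the target's PID:

  # Rough manual teardown mirroring the nvmftestfini output below (run as root).
  trap - SIGINT SIGTERM EXIT
  sync
  modprobe -v -r nvme-tcp        # trace shows "rmmod nvme_tcp"
  modprobe -v -r nvme-fabrics    # trace shows "rmmod nvme_fabrics"
  kill "$NVMF_TGT_PID"           # 115399 in this run; the test's killprocess also waits for it to exit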
00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.104 06:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:53.363 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:53.363 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:53.363 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.363 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:53.363 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:53.363 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:53.363 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:53.363 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:53.622 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.622 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.622 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.622 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.622 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.622 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.622 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.622 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.622 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.622 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.622 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.622 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.622 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.622 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.622 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.622 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.622 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:53.622 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:53.622 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:53.622 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:53.881 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:53.881 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:53.881 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:53.881 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:53.881 rmmod nvme_tcp 00:07:53.881 rmmod nvme_fabrics 00:07:53.881 rmmod nvme_keyring 00:07:53.881 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:53.881 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:53.881 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:53.881 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 115399 ']' 00:07:53.881 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 115399 00:07:53.881 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 115399 ']' 00:07:53.881 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 115399 00:07:53.881 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:53.881 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.881 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115399 00:07:53.881 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:53.881 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:53.881 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115399' 00:07:53.881 killing process with pid 115399 00:07:53.881 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 115399 00:07:53.881 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 115399 00:07:54.141 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:54.141 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 
-- # [[ tcp == \t\c\p ]] 00:07:54.141 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:54.141 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:54.141 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:54.141 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:54.141 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:54.141 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:54.141 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:54.141 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.141 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.141 06:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.048 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:56.048 00:07:56.048 real 0m47.069s 00:07:56.048 user 3m38.484s 00:07:56.048 sys 0m16.128s 00:07:56.048 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.048 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:56.048 ************************************ 00:07:56.048 END TEST nvmf_ns_hotplug_stress 00:07:56.048 ************************************ 00:07:56.048 06:53:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:56.048 06:53:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:56.048 06:53:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.048 06:53:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:56.048 ************************************ 00:07:56.048 START TEST nvmf_delete_subsystem 00:07:56.048 ************************************ 00:07:56.048 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:56.308 * Looking for test storage... 
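[Editor's note] The nvmf_ns_hotplug_stress pass that just finished is, at its core, the loop traced further above: ns_hotplug_stress.sh@16-18 attach the null bdevs null0..null7 to nqn.2016-06.io.spdk:cnode1 as namespaces 1-8 and then hot-remove them while host I/O is still running, before nvmftestfini tears the target down. The fragment below is a minimal reconstruction of that loop from the xtrace output, not the script itself; the backgrounding and the loop bound are assumptions (the trace only shows an `i < 10` guard and interleaved, out-of-order completions), while the rpc.py invocations are copied verbatim from the log.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Attach null0..null7 as NSIDs 1..8; the out-of-order NSIDs in the trace
# suggest these RPCs run as background jobs (an assumption).
i=0
while (( i < 8 )); do
    "$rpc" nvmf_subsystem_add_ns -n $(( i + 1 )) "$nqn" "null$i" &
    (( ++i ))
done
wait

# Hot-remove every namespace again while the host connection is still live.
for nsid in 1 2 3 4 5 6 7 8; do
    "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid" &
done
wait

The point of the exercise is that the target keeps serving while namespaces appear and disappear underneath connected hosts, which is why the pass above ends in a clean nvmftestfini and an "END TEST" banner rather than a crash.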
00:07:56.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:56.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.308 --rc genhtml_branch_coverage=1 00:07:56.308 --rc genhtml_function_coverage=1 00:07:56.308 --rc genhtml_legend=1 00:07:56.308 --rc geninfo_all_blocks=1 00:07:56.308 --rc geninfo_unexecuted_blocks=1 00:07:56.308 00:07:56.308 ' 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:56.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.308 --rc genhtml_branch_coverage=1 00:07:56.308 --rc genhtml_function_coverage=1 00:07:56.308 --rc genhtml_legend=1 00:07:56.308 --rc geninfo_all_blocks=1 00:07:56.308 --rc geninfo_unexecuted_blocks=1 00:07:56.308 00:07:56.308 ' 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:56.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.308 --rc genhtml_branch_coverage=1 00:07:56.308 --rc genhtml_function_coverage=1 00:07:56.308 --rc genhtml_legend=1 00:07:56.308 --rc geninfo_all_blocks=1 00:07:56.308 --rc geninfo_unexecuted_blocks=1 00:07:56.308 00:07:56.308 ' 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:56.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.308 --rc genhtml_branch_coverage=1 00:07:56.308 --rc genhtml_function_coverage=1 00:07:56.308 --rc genhtml_legend=1 00:07:56.308 --rc geninfo_all_blocks=1 00:07:56.308 --rc geninfo_unexecuted_blocks=1 00:07:56.308 00:07:56.308 ' 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.308 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:56.309 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:56.309 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:58.846 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:58.847 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:58.847 
06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:58.847 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:58.847 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:58.847 Found net devices under 0000:0a:00.1: cvl_0_1 
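[Editor's note] The device scan above resolves each e810 PCI function (0x8086:0x159b, ice driver) to its kernel netdev through sysfs, and the nvmf_tcp_init steps traced just after this split the two ports into a target/initiator pair: cvl_0_0 moves into its own network namespace as the target NIC, cvl_0_1 stays in the root namespace as the initiator. The fragment below condenses that plumbing as it appears in the log; it is a reconstruction of what nvmf/common.sh does here, not the helpers themselves, and the interface names and addresses are simply the ones the log prints.

# PCI -> netdev mapping, as in the sysfs glob traced above
for pci in 0000:0a:00.0 0000:0a:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")              # keep only the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done

# Target/initiator split performed by nvmf_tcp_init (traced below)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2      # confirm the initiator can reach the target address

The nvmf_tgt application itself is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x3, a few lines below), so the target listens on 10.0.0.2 while the host-side tools connect from the root namespace over cvl_0_1.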
00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:58.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:58.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:07:58.847 00:07:58.847 --- 10.0.0.2 ping statistics --- 00:07:58.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.847 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:58.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:58.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:07:58.847 00:07:58.847 --- 10.0.0.1 ping statistics --- 00:07:58.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.847 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:58.847 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=122671 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 122671 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 122671 ']' 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.848 06:53:19 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.848 [2024-11-18 06:53:19.507018] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:58.848 [2024-11-18 06:53:19.507106] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.848 [2024-11-18 06:53:19.577003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:58.848 [2024-11-18 06:53:19.618515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.848 [2024-11-18 06:53:19.618578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:58.848 [2024-11-18 06:53:19.618607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:58.848 [2024-11-18 06:53:19.618619] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:58.848 [2024-11-18 06:53:19.618629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:58.848 [2024-11-18 06:53:19.619909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.848 [2024-11-18 06:53:19.619915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.848 [2024-11-18 06:53:19.757742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:58.848 06:53:19 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.848 [2024-11-18 06:53:19.773979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.848 NULL1 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.848 Delay0 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=122691 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:58.848 06:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:59.107 [2024-11-18 06:53:19.858808] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
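[Editor's note] At this point delete_subsystem.sh has assembled the whole target: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces (-m 10), a listener on 10.0.0.2:4420, and a Delay0 bdev (a bdev_delay wrapper around the null bdev NULL1) exported as its namespace, with spdk_nvme_perf just launched against it as pid 122691. The fragment below condenses those steps from the trace together with the delete that follows; the RPC options and the perf command line are copied verbatim from the log, while the surrounding glue (plain rpc.py calls standing in for the script's rpc_cmd wrapper, the backgrounding, perf_pid=$!) is an assumption about how the script strings them together.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
"$rpc" bdev_null_create NULL1 1000 512
"$rpc" bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # large delays keep I/O queued
"$rpc" nvmf_subsystem_add_ns "$nqn" Delay0

# Start host I/O against the delayed namespace, then delete the subsystem under it.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

sleep 2
"$rpc" nvmf_delete_subsystem "$nqn"

Because Delay0 holds every request, the delete lands while the host still has a full queue (-q 128) outstanding; those requests are failed back, which is the wall of "Read/Write completed with error (sct=0, sc=8)" completions and "starting I/O failed: -6" lines that follow, and the script then simply polls until perf has exited (the later kill -0 122691 reporting "No such process").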
00:08:01.007 06:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:01.007 06:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.007 06:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:01.007 Read completed with error (sct=0, sc=8) 00:08:01.007 starting I/O failed: -6 00:08:01.007 Read completed with error (sct=0, sc=8) 00:08:01.007 Write completed with error (sct=0, sc=8) 00:08:01.007 Read completed with error (sct=0, sc=8) 00:08:01.007 Write completed with error (sct=0, sc=8) 00:08:01.007 starting I/O failed: -6 00:08:01.007 Read completed with error (sct=0, sc=8) 00:08:01.007 Write completed with error (sct=0, sc=8) 00:08:01.007 Read completed with error (sct=0, sc=8) 00:08:01.007 Read completed with error (sct=0, sc=8) 00:08:01.008 starting I/O failed: -6 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 starting I/O failed: -6 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 starting I/O failed: -6 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 starting I/O failed: -6 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 starting I/O failed: -6 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 starting I/O failed: -6 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 starting I/O failed: -6 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 [2024-11-18 06:53:21.981313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa5bc000c40 is same with the state(6) to be set 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with 
error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 starting I/O failed: -6 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 starting I/O failed: -6 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 starting I/O failed: -6 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 starting I/O failed: -6 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 starting I/O failed: -6 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 starting I/O failed: -6 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 starting I/O failed: -6 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 starting I/O failed: -6 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, 
sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 starting I/O failed: -6 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 starting I/O failed: -6 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 starting I/O failed: -6 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 [2024-11-18 06:53:21.982304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9beb40 is same with the state(6) to be set 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Read completed with error (sct=0, sc=8) 00:08:01.008 Write completed with error (sct=0, sc=8) 00:08:01.009 Read completed with error (sct=0, sc=8) 00:08:01.009 Read completed with error (sct=0, sc=8) 00:08:01.009 Read completed with error (sct=0, sc=8) 00:08:01.009 Read completed with error (sct=0, sc=8) 00:08:01.009 Write completed with error (sct=0, sc=8) 00:08:01.009 Write completed with error (sct=0, sc=8) 00:08:01.009 Read completed with error (sct=0, sc=8) 00:08:01.009 Read completed with error (sct=0, sc=8) 00:08:01.009 Read completed with error (sct=0, sc=8) 00:08:01.009 Read completed with error (sct=0, sc=8) 00:08:01.009 Write completed with error (sct=0, sc=8) 00:08:01.009 Read completed with error (sct=0, sc=8) 00:08:01.009 [2024-11-18 06:53:21.982636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa5bc00d020 is same with the state(6) to be set 00:08:02.383 [2024-11-18 06:53:22.953458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cc5b0 is same with the state(6) to be set 00:08:02.383 Read completed with error (sct=0, 
sc=8) 00:08:02.383 Read completed with error (sct=0, sc=8) 00:08:02.383 Read completed with error (sct=0, sc=8) 00:08:02.383 Write completed with error (sct=0, sc=8) 00:08:02.383 Read completed with error (sct=0, sc=8) 00:08:02.383 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 [2024-11-18 06:53:22.985219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9be810 is same with the state(6) to be set 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 
Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 [2024-11-18 06:53:22.985480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9be3f0 is same with the state(6) to be set 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 [2024-11-18 06:53:22.985722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bee70 is same with the state(6) to be set 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Write completed with error (sct=0, sc=8) 00:08:02.384 Read completed with error (sct=0, sc=8) 00:08:02.384 [2024-11-18 06:53:22.985841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa5bc00d350 is same with the state(6) to be set 00:08:02.384 Initializing NVMe Controllers 00:08:02.384 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:02.384 Controller IO queue size 128, less than required. 00:08:02.384 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:02.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:02.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:02.384 Initialization complete. Launching workers. 00:08:02.384 ======================================================== 00:08:02.384 Latency(us) 00:08:02.384 Device Information : IOPS MiB/s Average min max 00:08:02.384 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.60 0.08 1046644.83 595.26 2002968.66 00:08:02.384 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 143.33 0.07 927526.52 488.08 1014244.09 00:08:02.384 ======================================================== 00:08:02.384 Total : 314.94 0.15 992431.93 488.08 2002968.66 00:08:02.384 00:08:02.384 [2024-11-18 06:53:22.987138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cc5b0 (9): Bad file descriptor 00:08:02.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:02.384 06:53:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.384 06:53:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:02.384 06:53:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 122691 00:08:02.384 06:53:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 122691 00:08:02.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (122691) - No such process 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 122691 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 122691 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 122691 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:02.643 [2024-11-18 06:53:23.508937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=123116 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123116 00:08:02.643 06:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:02.643 [2024-11-18 06:53:23.571709] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
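In plain terms, the trace above re-creates the subsystem that was deleted earlier and immediately points a short perf run at it. The sketch below condenses that sequence for readability; it assumes rpc_cmd resolves to the stock scripts/rpc.py wrapper against the default target socket (the wrapper itself is not shown in this excerpt), and it is a sketch of what the xtrace records, not the literal delete_subsystem.sh code.

#!/usr/bin/env bash
# Sketch of the subsystem re-creation recorded in the xtrace above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"   # assumed backing for rpc_cmd

# Re-create cnode1 with the same serial number and namespace limit as before
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# 3-second 70/30 random read/write run against the new listener
# (the process whose pid shows up as perf_pid=123116 in the trace above)
"$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

The test then polls kill -0 on that pid every half second, exactly as the sleep 0.5 loop below shows, until the run finishes on its own.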
00:08:03.209 06:53:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:03.209 06:53:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123116 00:08:03.209 06:53:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:03.775 06:53:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:03.775 06:53:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123116 00:08:03.775 06:53:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:04.340 06:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:04.340 06:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123116 00:08:04.340 06:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:04.599 06:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:04.599 06:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123116 00:08:04.599 06:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:05.165 06:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:05.165 06:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123116 00:08:05.165 06:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:05.731 06:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:05.731 06:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123116 00:08:05.731 06:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:05.731 Initializing NVMe Controllers 00:08:05.731 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:05.731 Controller IO queue size 128, less than required. 00:08:05.731 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:05.731 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:05.731 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:05.731 Initialization complete. Launching workers. 
00:08:05.731 ======================================================== 00:08:05.731 Latency(us) 00:08:05.731 Device Information : IOPS MiB/s Average min max 00:08:05.731 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003530.69 1000153.18 1011751.71 00:08:05.731 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004846.79 1000194.43 1012254.17 00:08:05.731 ======================================================== 00:08:05.731 Total : 256.00 0.12 1004188.74 1000153.18 1012254.17 00:08:05.731 00:08:06.298 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:06.298 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123116 00:08:06.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (123116) - No such process 00:08:06.298 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 123116 00:08:06.298 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:06.298 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:06.299 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:06.299 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:06.299 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:06.299 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:06.299 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:06.299 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:06.299 rmmod nvme_tcp 00:08:06.299 rmmod nvme_fabrics 00:08:06.299 rmmod nvme_keyring 00:08:06.299 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:06.299 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:06.299 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:06.299 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 122671 ']' 00:08:06.299 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 122671 00:08:06.299 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 122671 ']' 00:08:06.299 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 122671 00:08:06.299 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:06.299 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.299 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122671 00:08:06.299 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:06.299 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:08:06.299 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122671' 00:08:06.299 killing process with pid 122671 00:08:06.299 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 122671 00:08:06.299 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 122671 00:08:06.560 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:06.560 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:06.560 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:06.560 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:06.560 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:06.560 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:06.560 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:06.560 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:06.560 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:06.560 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.560 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.560 06:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.473 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:08.473 00:08:08.473 real 0m12.357s 00:08:08.473 user 0m27.732s 00:08:08.473 sys 0m3.003s 00:08:08.473 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.473 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.473 ************************************ 00:08:08.473 END TEST nvmf_delete_subsystem 00:08:08.473 ************************************ 00:08:08.473 06:53:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:08.473 06:53:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:08.473 06:53:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.473 06:53:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:08.473 ************************************ 00:08:08.473 START TEST nvmf_host_management 00:08:08.473 ************************************ 00:08:08.474 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:08.733 * Looking for test storage... 
00:08:08.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.733 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:08.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.734 --rc genhtml_branch_coverage=1 00:08:08.734 --rc genhtml_function_coverage=1 00:08:08.734 --rc genhtml_legend=1 00:08:08.734 --rc geninfo_all_blocks=1 00:08:08.734 --rc geninfo_unexecuted_blocks=1 00:08:08.734 00:08:08.734 ' 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:08.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.734 --rc genhtml_branch_coverage=1 00:08:08.734 --rc genhtml_function_coverage=1 00:08:08.734 --rc genhtml_legend=1 00:08:08.734 --rc geninfo_all_blocks=1 00:08:08.734 --rc geninfo_unexecuted_blocks=1 00:08:08.734 00:08:08.734 ' 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:08.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.734 --rc genhtml_branch_coverage=1 00:08:08.734 --rc genhtml_function_coverage=1 00:08:08.734 --rc genhtml_legend=1 00:08:08.734 --rc geninfo_all_blocks=1 00:08:08.734 --rc geninfo_unexecuted_blocks=1 00:08:08.734 00:08:08.734 ' 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:08.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.734 --rc genhtml_branch_coverage=1 00:08:08.734 --rc genhtml_function_coverage=1 00:08:08.734 --rc genhtml_legend=1 00:08:08.734 --rc geninfo_all_blocks=1 00:08:08.734 --rc geninfo_unexecuted_blocks=1 00:08:08.734 00:08:08.734 ' 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:08:08.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:08.734 06:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.270 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:11.271 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:11.271 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:11.271 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.271 06:53:31 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:11.271 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:11.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:11.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:08:11.271 00:08:11.271 --- 10.0.0.2 ping statistics --- 00:08:11.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.271 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:08:11.271 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:11.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:11.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:08:11.271 00:08:11.271 --- 10.0.0.1 ping statistics --- 00:08:11.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.272 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:08:11.272 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.272 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:11.272 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:11.272 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.272 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:11.272 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:11.272 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.272 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:11.272 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:11.272 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:11.272 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:11.272 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:11.272 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:11.272 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:11.272 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.272 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=125576 00:08:11.272 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:11.272 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 125576 00:08:11.272 06:53:31 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 125576 ']' 00:08:11.272 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.272 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.272 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.272 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.272 06:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.272 [2024-11-18 06:53:31.871786] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:08:11.272 [2024-11-18 06:53:31.871885] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.272 [2024-11-18 06:53:31.948573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.272 [2024-11-18 06:53:31.999264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.272 [2024-11-18 06:53:31.999338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:11.272 [2024-11-18 06:53:31.999352] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:11.272 [2024-11-18 06:53:31.999362] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:11.272 [2024-11-18 06:53:31.999371] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
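Before the target comes up, nvmftestinit (traced further above) splits the two ice-driven ports (0000:0a:00.0/1) into a minimal two-node topology: the target-side port cvl_0_0 is moved into its own network namespace with 10.0.0.2, the initiator-side port cvl_0_1 keeps 10.0.0.1 in the root namespace, and TCP port 4420 is opened in the firewall. The sketch below restates that setup using only the interface names, addresses, and commands visible in the trace; error handling and the surrounding helper functions are omitted.

#!/usr/bin/env bash
# Topology built by nvmftestinit: target side isolated in a namespace,
# initiator side left in the root namespace, NVMe/TCP port allowed in.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity checks in both directions, then the target runs inside the namespace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

The -m 0x1E core mask is why four reactors (cores 1 through 4) report in just above once nvmf_tgt starts.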
00:08:11.272 [2024-11-18 06:53:32.001015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.272 [2024-11-18 06:53:32.001079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.272 [2024-11-18 06:53:32.001141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:11.272 [2024-11-18 06:53:32.001144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.272 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.272 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:11.272 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:11.272 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:11.272 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.272 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.272 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:11.272 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.272 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.272 [2024-11-18 06:53:32.150755] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.272 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.272 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:11.272 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:11.272 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.272 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:11.272 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:11.272 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:11.272 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.272 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.272 Malloc0 00:08:11.272 [2024-11-18 06:53:32.224219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.272 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.272 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:11.272 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:11.272 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.531 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=125623 00:08:11.531 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 125623 /var/tmp/bdevperf.sock 00:08:11.531 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 125623 ']' 00:08:11.531 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:11.531 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:11.531 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:11.531 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.531 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:11.531 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:11.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:11.531 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:11.531 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.531 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:11.531 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.531 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:11.531 { 00:08:11.531 "params": { 00:08:11.531 "name": "Nvme$subsystem", 00:08:11.531 "trtype": "$TEST_TRANSPORT", 00:08:11.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:11.531 "adrfam": "ipv4", 00:08:11.531 "trsvcid": "$NVMF_PORT", 00:08:11.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:11.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:11.531 "hdgst": ${hdgst:-false}, 00:08:11.531 "ddgst": ${ddgst:-false} 00:08:11.531 }, 00:08:11.531 "method": "bdev_nvme_attach_controller" 00:08:11.531 } 00:08:11.531 EOF 00:08:11.531 )") 00:08:11.531 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:11.531 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:11.531 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:11.531 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:11.531 "params": { 00:08:11.531 "name": "Nvme0", 00:08:11.531 "trtype": "tcp", 00:08:11.531 "traddr": "10.0.0.2", 00:08:11.531 "adrfam": "ipv4", 00:08:11.531 "trsvcid": "4420", 00:08:11.531 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:11.531 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:11.531 "hdgst": false, 00:08:11.531 "ddgst": false 00:08:11.531 }, 00:08:11.531 "method": "bdev_nvme_attach_controller" 00:08:11.531 }' 00:08:11.531 [2024-11-18 06:53:32.307502] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
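The --json /dev/fd/63 argument above is a process substitution: gen_nvmf_target_json prints a bdev configuration (its substituted output is the printf fragment shown in the trace) and bdevperf reads it as its startup config. The sketch below shows an equivalent launch with the same fragment written to a file instead; the outer "subsystems"/"bdev" wrapper is an assumption based on the standard SPDK JSON config layout and is not itself visible in this excerpt.

#!/usr/bin/env bash
# Sketch: feed bdevperf the same bdev_nvme_attach_controller config from a file.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Same workload parameters as the invocation recorded above
"$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock --json "$cfg" \
    -q 64 -o 65536 -w verify -t 10

The waitforio loop traced below then polls bdev_get_iostat -b Nvme0n1 over /var/tmp/bdevperf.sock and proceeds once the reported num_read_ops reaches 100.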
00:08:11.531 [2024-11-18 06:53:32.307602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125623 ] 00:08:11.531 [2024-11-18 06:53:32.377930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.531 [2024-11-18 06:53:32.425069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.790 Running I/O for 10 seconds... 00:08:11.790 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.790 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:11.790 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:11.790 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.790 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.790 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.790 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:11.790 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:11.790 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:11.790 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:11.790 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:11.790 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:11.790 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:11.790 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:11.790 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:11.790 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:11.790 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.790 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.790 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.790 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:11.790 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:11.790 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:12.049 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:12.049 
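The waitforio helper whose xtrace appears around this point simply polls the bdevperf RPC socket until the test bdev has serviced a minimum number of reads. A condensed rendering is below; the retry count, threshold, sleep interval, socket path and bdev name are all taken from the trace, and rpc_cmd is the harness's wrapper around scripts/rpc.py.

waitforio() {
    local rpc_sock=$1 bdev=$2 i=10 count
    while (( i != 0 )); do
        count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
        [ "$count" -ge 100 ] && return 0   # enough I/O has flowed; let the test proceed
        sleep 0.25
        (( i-- ))
    done
    return 1                               # bdevperf never produced I/O
}
# usage, matching the trace: waitforio /var/tmp/bdevperf.sock Nvme0n1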
06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:12.049 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:12.049 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:12.049 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.049 06:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.049 06:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.309 06:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=563 00:08:12.309 06:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 563 -ge 100 ']' 00:08:12.309 06:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:12.309 06:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:12.309 06:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:12.309 06:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:12.309 06:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.309 06:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.309 [2024-11-18 06:53:33.038950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6a40 is same with the state(6) to be set 00:08:12.309 [2024-11-18 06:53:33.039041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6a40 is same with the state(6) to be set 00:08:12.309 [2024-11-18 06:53:33.039057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6a40 is same with the state(6) to be set 00:08:12.309 [2024-11-18 06:53:33.039069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6a40 is same with the state(6) to be set 00:08:12.309 [2024-11-18 06:53:33.039081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6a40 is same with the state(6) to be set 00:08:12.309 [2024-11-18 06:53:33.039094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6a40 is same with the state(6) to be set 00:08:12.310 [2024-11-18 06:53:33.039106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6a40 is same with the state(6) to be set 00:08:12.310 [2024-11-18 06:53:33.039118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6a40 is same with the state(6) to be set 00:08:12.310 [2024-11-18 06:53:33.039131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6a40 is same with the state(6) to be set 00:08:12.310 [2024-11-18 06:53:33.039143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6a40 is same with the state(6) to be set 00:08:12.310 [2024-11-18 06:53:33.039155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6a40 is 
same with the state(6) to be set 00:08:12.310 [2024-11-18 06:53:33.039167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6a40 is same with the state(6) to be set 00:08:12.310 [2024-11-18 06:53:33.039179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6a40 is same with the state(6) to be set 00:08:12.310 [2024-11-18 06:53:33.039191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6a40 is same with the state(6) to be set 00:08:12.310 [2024-11-18 06:53:33.039203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6a40 is same with the state(6) to be set 00:08:12.310 [2024-11-18 06:53:33.039215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6a40 is same with the state(6) to be set 00:08:12.310 [2024-11-18 06:53:33.039227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6a40 is same with the state(6) to be set 00:08:12.310 [2024-11-18 06:53:33.039239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6a40 is same with the state(6) to be set 00:08:12.310 [2024-11-18 06:53:33.039250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6a40 is same with the state(6) to be set 00:08:12.310 [2024-11-18 06:53:33.039262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6a40 is same with the state(6) to be set 00:08:12.310 [2024-11-18 06:53:33.039274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6a40 is same with the state(6) to be set 00:08:12.310 [2024-11-18 06:53:33.039286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6a40 is same with the state(6) to be set 00:08:12.310 06:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.310 06:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:12.310 06:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.310 06:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.310 [2024-11-18 06:53:33.048295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:12.310 [2024-11-18 06:53:33.048337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.048355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:12.310 [2024-11-18 06:53:33.048370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.048384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:12.310 [2024-11-18 06:53:33.048398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.048412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:12.310 [2024-11-18 06:53:33.048425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.048438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32d70 is same with the state(6) to be set 00:08:12.310 [2024-11-18 06:53:33.048770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.048803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.048828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.048844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.048862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.048876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.048891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.048906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.048921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.048934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.048949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.048963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.048979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.048993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.049013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.049029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.049044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.049058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.049073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.049087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.049102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.049116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.049131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.049146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.049161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.049176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.049192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.049205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.049220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.049234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.049249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.049263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.049278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.049292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.049307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.049320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.049335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.049348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:08:12.310 [2024-11-18 06:53:33.049363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.049381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.049397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.049411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.049425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.049439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.049454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.049468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.049483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.049506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.310 [2024-11-18 06:53:33.049522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.310 [2024-11-18 06:53:33.049536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.049551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.049565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.049580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.049593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.049608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.049622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.049637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.049651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 
[2024-11-18 06:53:33.049665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.049679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.049693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.049707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.049721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.049739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.049755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.049769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.049784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.049803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.049818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.049832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.049848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.049862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.049877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.049890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.049905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.049919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.049933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.049947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 
06:53:33.049962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.049976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.049991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.050020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.050049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.050078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.050110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.050140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.050168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.050197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.050226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 
06:53:33.050255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.050283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.050312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.050341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.050369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.050398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.050426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.050455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.050497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.050529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 
06:53:33.050567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.050595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.050623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.050652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 [2024-11-18 06:53:33.050681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.311 [2024-11-18 06:53:33.050702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.311 06:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.312 06:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:12.312 [2024-11-18 06:53:33.051885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:12.312 task offset: 81920 on job bdev=Nvme0n1 fails 00:08:12.312 00:08:12.312 Latency(us) 00:08:12.312 [2024-11-18T05:53:33.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.312 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:12.312 Job: Nvme0n1 ended in about 0.40 seconds with error 00:08:12.312 Verification LBA range: start 0x0 length 0x400 00:08:12.312 Nvme0n1 : 0.40 1592.69 99.54 159.27 0.00 35473.99 2852.03 35146.71 00:08:12.312 [2024-11-18T05:53:33.290Z] =================================================================================================================== 00:08:12.312 [2024-11-18T05:53:33.290Z] Total : 1592.69 99.54 159.27 0.00 35473.99 2852.03 35146.71 00:08:12.312 [2024-11-18 06:53:33.053765] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:12.312 [2024-11-18 06:53:33.053814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe32d70 (9): Bad file descriptor 00:08:12.312 [2024-11-18 06:53:33.099730] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
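The wall of ABORTED - SQ DELETION completions above is the intended effect of the test: host_management.sh revokes the initiator's host NQN while bdevperf still has its 64 queued writes in flight, then immediately restores it, and the host-side bdev_nvme layer is expected to reset and reconnect on its own (the "Resetting controller successful" notice). Stripped of the xtrace noise, the disruption is just the three commands below; the commands are as traced above, only the comments are editorial interpretation.

rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# -> every outstanding WRITE is completed as ABORTED - SQ DELETION and the qpair drops
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1   # give bdev_nvme's reset path time to report "Resetting controller successful"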
00:08:13.247 06:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 125623 00:08:13.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (125623) - No such process 00:08:13.247 06:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:13.247 06:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:13.247 06:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:13.247 06:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:13.247 06:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:13.247 06:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:13.247 06:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:13.247 06:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:13.247 { 00:08:13.247 "params": { 00:08:13.247 "name": "Nvme$subsystem", 00:08:13.247 "trtype": "$TEST_TRANSPORT", 00:08:13.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:13.247 "adrfam": "ipv4", 00:08:13.247 "trsvcid": "$NVMF_PORT", 00:08:13.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:13.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:13.247 "hdgst": ${hdgst:-false}, 00:08:13.247 "ddgst": ${ddgst:-false} 00:08:13.247 }, 00:08:13.247 "method": "bdev_nvme_attach_controller" 00:08:13.247 } 00:08:13.247 EOF 00:08:13.247 )") 00:08:13.247 06:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:13.247 06:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:13.247 06:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:13.247 06:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:13.247 "params": { 00:08:13.247 "name": "Nvme0", 00:08:13.247 "trtype": "tcp", 00:08:13.247 "traddr": "10.0.0.2", 00:08:13.247 "adrfam": "ipv4", 00:08:13.247 "trsvcid": "4420", 00:08:13.247 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:13.247 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:13.247 "hdgst": false, 00:08:13.247 "ddgst": false 00:08:13.247 }, 00:08:13.247 "method": "bdev_nvme_attach_controller" 00:08:13.247 }' 00:08:13.247 [2024-11-18 06:53:34.097794] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:08:13.247 [2024-11-18 06:53:34.097868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125896 ] 00:08:13.247 [2024-11-18 06:53:34.168015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.247 [2024-11-18 06:53:34.216054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.814 Running I/O for 1 seconds... 
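For this second, post-recovery run the script regenerates the bdev_nvme_attach_controller parameters printed verbatim above and hands them to bdevperf over a file descriptor. The inner params below are copied from the log; the outer subsystems/bdev framing is an assumption about the wrapper gen_nvmf_target_json emits (it is not shown in this excerpt), and a temp file is used here instead of /dev/fd/62 purely for readability. Run from the spdk checkout.

cat > bdevperf_nvme0.json <<'JSON'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": false, "ddgst": false }
    } ]
  } ]
}
JSON
./build/examples/bdevperf --json bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1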
00:08:14.750 1664.00 IOPS, 104.00 MiB/s 00:08:14.750 Latency(us) 00:08:14.750 [2024-11-18T05:53:35.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.750 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:14.750 Verification LBA range: start 0x0 length 0x400 00:08:14.750 Nvme0n1 : 1.02 1686.58 105.41 0.00 0.00 37331.85 7233.23 33204.91 00:08:14.750 [2024-11-18T05:53:35.728Z] =================================================================================================================== 00:08:14.750 [2024-11-18T05:53:35.728Z] Total : 1686.58 105.41 0.00 0.00 37331.85 7233.23 33204.91 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:15.009 rmmod nvme_tcp 00:08:15.009 rmmod nvme_fabrics 00:08:15.009 rmmod nvme_keyring 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 125576 ']' 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 125576 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 125576 ']' 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 125576 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125576 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:15.009 06:53:35 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125576' 00:08:15.009 killing process with pid 125576 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 125576 00:08:15.009 06:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 125576 00:08:15.269 [2024-11-18 06:53:36.082136] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:15.269 06:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:15.269 06:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:15.269 06:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:15.269 06:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:15.269 06:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:15.269 06:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:15.269 06:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:15.269 06:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:15.269 06:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:15.269 06:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.269 06:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.269 06:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.180 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:17.440 00:08:17.440 real 0m8.740s 00:08:17.440 user 0m19.610s 00:08:17.440 sys 0m2.681s 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.440 ************************************ 00:08:17.440 END TEST nvmf_host_management 00:08:17.440 ************************************ 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:17.440 ************************************ 00:08:17.440 START TEST nvmf_lvol 00:08:17.440 ************************************ 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:17.440 * Looking for test storage... 00:08:17.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:17.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.440 --rc genhtml_branch_coverage=1 00:08:17.440 --rc genhtml_function_coverage=1 00:08:17.440 --rc genhtml_legend=1 00:08:17.440 --rc geninfo_all_blocks=1 00:08:17.440 --rc geninfo_unexecuted_blocks=1 00:08:17.440 00:08:17.440 ' 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:17.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.440 --rc genhtml_branch_coverage=1 00:08:17.440 --rc genhtml_function_coverage=1 00:08:17.440 --rc genhtml_legend=1 00:08:17.440 --rc geninfo_all_blocks=1 00:08:17.440 --rc geninfo_unexecuted_blocks=1 00:08:17.440 00:08:17.440 ' 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:17.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.440 --rc genhtml_branch_coverage=1 00:08:17.440 --rc genhtml_function_coverage=1 00:08:17.440 --rc genhtml_legend=1 00:08:17.440 --rc geninfo_all_blocks=1 00:08:17.440 --rc geninfo_unexecuted_blocks=1 00:08:17.440 00:08:17.440 ' 00:08:17.440 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:17.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.441 --rc genhtml_branch_coverage=1 00:08:17.441 --rc genhtml_function_coverage=1 00:08:17.441 --rc genhtml_legend=1 00:08:17.441 --rc geninfo_all_blocks=1 00:08:17.441 --rc geninfo_unexecuted_blocks=1 00:08:17.441 00:08:17.441 ' 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
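The lt/cmp_versions xtrace interleaved above appears to be the harness checking whether the installed lcov (1.15 here) predates 2.x so it can pick the matching coverage flags. A condensed form of that comparison is sketched below: it splits both versions on '.', '-' and ':' and compares component by component, handling only the '<' operator exercised in this trace (the real helper also validates each field and supports other operators).

lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local IFS=.-:                      # split on '.', '-' and ':' as in the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"             # $2 is the operator; only '<' is handled here
    local -i v ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # left side is already newer
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less: '<' holds
    done
    return 1                                              # equal versions: '<' is false
}
# e.g. lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'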
00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:17.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:17.441 06:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:19.977 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:19.977 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:19.977 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:19.978 06:53:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:19.978 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:19.978 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:19.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:08:19.978 00:08:19.978 --- 10.0.0.2 ping statistics --- 00:08:19.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.978 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:19.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:08:19.978 00:08:19.978 --- 10.0.0.1 ping statistics --- 00:08:19.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.978 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=128114 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 128114 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 128114 ']' 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.978 06:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:19.978 [2024-11-18 06:53:40.778332] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
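Stripped of the xtrace prefixes, the nvmftestinit/nvmf_tcp_init trace above amounts to a small amount of iproute2 and iptables work: the target-side port is moved into a private network namespace, both ports get 10.0.0.x addresses, the NVMe/TCP port is opened on the initiator-facing interface, reachability is checked in both directions, and the target application is then launched inside the namespace. A minimal sketch of that sequence, run as root, assuming the same cvl_0_0/cvl_0_1 interface names, addressing, and SPDK checkout path recorded in this log (backgrounding with & is an illustrative shorthand; the harness itself waits for the app to listen on /var/tmp/spdk.sock before issuing RPCs):

    # target-side port lives in its own namespace; initiator-side port stays in the default namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow the NVMe/TCP listener port in on the initiator-facing interface, then sanity-check both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # load the kernel initiator and start the target inside the namespace, as this job does
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &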
00:08:19.978 [2024-11-18 06:53:40.778411] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.978 [2024-11-18 06:53:40.851812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:19.978 [2024-11-18 06:53:40.898135] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.978 [2024-11-18 06:53:40.898190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.978 [2024-11-18 06:53:40.898218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.978 [2024-11-18 06:53:40.898229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.978 [2024-11-18 06:53:40.898239] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.978 [2024-11-18 06:53:40.899817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.978 [2024-11-18 06:53:40.899931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.978 [2024-11-18 06:53:40.899939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.237 06:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.237 06:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:20.237 06:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:20.237 06:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:20.237 06:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:20.237 06:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.237 06:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:20.495 [2024-11-18 06:53:41.280171] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.495 06:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:20.753 06:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:20.753 06:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:21.012 06:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:21.012 06:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:21.271 06:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:21.529 06:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=73478168-bdcb-4552-a1d7-dbdea9f8491d 00:08:21.529 06:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 73478168-bdcb-4552-a1d7-dbdea9f8491d lvol 20 00:08:21.787 06:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b43d5ad8-41ac-46b1-93d8-e1df6fb83f9d 00:08:21.787 06:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:22.045 06:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b43d5ad8-41ac-46b1-93d8-e1df6fb83f9d 00:08:22.304 06:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:22.562 [2024-11-18 06:53:43.506258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.562 06:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:22.820 06:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=128425 00:08:22.820 06:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:22.820 06:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:24.195 06:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b43d5ad8-41ac-46b1-93d8-e1df6fb83f9d MY_SNAPSHOT 00:08:24.195 06:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ff282ba0-2e5a-42b6-90a7-44db7c240595 00:08:24.195 06:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b43d5ad8-41ac-46b1-93d8-e1df6fb83f9d 30 00:08:24.454 06:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ff282ba0-2e5a-42b6-90a7-44db7c240595 MY_CLONE 00:08:25.021 06:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=bb940907-9fdc-47a1-a344-9ca6de863515 00:08:25.021 06:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate bb940907-9fdc-47a1-a344-9ca6de863515 00:08:25.588 06:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 128425 00:08:33.701 Initializing NVMe Controllers 00:08:33.701 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:33.701 Controller IO queue size 128, less than required. 00:08:33.701 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
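Condensed, target/nvmf_lvol.sh builds its device stack and load generator with a short series of rpc.py calls: two malloc bdevs are striped into a raid0, an lvstore and an lvol are created on top, the lvol is exported over NVMe/TCP, and spdk_nvme_perf writes to it while a snapshot/resize/clone/inflate cycle runs against the live volume. The sketch below uses only the commands visible in this trace; capturing UUIDs into shell variables with $(...) is an illustrative shorthand for how the script records the values returned by each create call:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                        # -> Malloc0
    $rpc bdev_malloc_create 64 512                        # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # returns the lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # LVOL_BDEV_INIT_SIZE, returns the lvol UUID

    # export the lvol over NVMe/TCP on the target address set up earlier
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # drive random writes (cores 3-4, away from the target's 0x7 mask) while the lvol is
    # snapshotted, resized, cloned and the clone inflated
    $perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
          -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30                      # LVOL_BDEV_FINAL_SIZE
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"
    wait                                                  # perf then prints the IOPS/latency summary seen below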
00:08:33.701 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:33.701 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:33.701 Initialization complete. Launching workers. 00:08:33.701 ======================================================== 00:08:33.701 Latency(us) 00:08:33.701 Device Information : IOPS MiB/s Average min max 00:08:33.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10392.10 40.59 12322.26 2068.32 142806.24 00:08:33.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10178.40 39.76 12583.79 3999.42 76032.03 00:08:33.701 ======================================================== 00:08:33.701 Total : 20570.50 80.35 12451.67 2068.32 142806.24 00:08:33.701 00:08:33.701 06:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:33.701 06:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b43d5ad8-41ac-46b1-93d8-e1df6fb83f9d 00:08:33.960 06:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 73478168-bdcb-4552-a1d7-dbdea9f8491d 00:08:34.218 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:34.218 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:34.218 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:34.218 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:34.218 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:34.218 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:34.218 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:34.218 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:34.218 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:34.218 rmmod nvme_tcp 00:08:34.218 rmmod nvme_fabrics 00:08:34.218 rmmod nvme_keyring 00:08:34.218 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:34.218 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:34.218 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:34.218 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 128114 ']' 00:08:34.218 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 128114 00:08:34.218 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 128114 ']' 00:08:34.218 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 128114 00:08:34.218 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:34.218 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.218 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 128114 00:08:34.218 06:53:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.218 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.218 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 128114' 00:08:34.218 killing process with pid 128114 00:08:34.218 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 128114 00:08:34.218 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 128114 00:08:34.478 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:34.478 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:34.478 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:34.478 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:34.478 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:34.478 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:34.478 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:34.478 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:34.478 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:34.478 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.479 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.479 06:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:37.027 00:08:37.027 real 0m19.243s 00:08:37.027 user 1m5.346s 00:08:37.027 sys 0m5.635s 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:37.027 ************************************ 00:08:37.027 END TEST nvmf_lvol 00:08:37.027 ************************************ 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:37.027 ************************************ 00:08:37.027 START TEST nvmf_lvs_grow 00:08:37.027 ************************************ 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:37.027 * Looking for test storage... 
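The nvmftestfini teardown traced above is the mirror image of the setup: the target process is killed, the kernel initiator modules are unloaded, only the iptables rules tagged with the SPDK_NVMF comment are dropped, and the namespace plumbing is flushed before run_test prints the timing banner. A compact sketch of that cleanup using the pid, interface and namespace names from this run; the explicit ip netns delete is an assumption about what the _remove_spdk_ns helper does, since the trace hides its body:

    kill 128114                                # nvmf_tgt pid from this run
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # drop only the rules the harness tagged with the SPDK_NVMF comment
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # flush addresses and remove the target-side namespace
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true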
00:08:37.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.027 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:37.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.028 --rc genhtml_branch_coverage=1 00:08:37.028 --rc genhtml_function_coverage=1 00:08:37.028 --rc genhtml_legend=1 00:08:37.028 --rc geninfo_all_blocks=1 00:08:37.028 --rc geninfo_unexecuted_blocks=1 00:08:37.028 00:08:37.028 ' 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:37.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.028 --rc genhtml_branch_coverage=1 00:08:37.028 --rc genhtml_function_coverage=1 00:08:37.028 --rc genhtml_legend=1 00:08:37.028 --rc geninfo_all_blocks=1 00:08:37.028 --rc geninfo_unexecuted_blocks=1 00:08:37.028 00:08:37.028 ' 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:37.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.028 --rc genhtml_branch_coverage=1 00:08:37.028 --rc genhtml_function_coverage=1 00:08:37.028 --rc genhtml_legend=1 00:08:37.028 --rc geninfo_all_blocks=1 00:08:37.028 --rc geninfo_unexecuted_blocks=1 00:08:37.028 00:08:37.028 ' 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:37.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.028 --rc genhtml_branch_coverage=1 00:08:37.028 --rc genhtml_function_coverage=1 00:08:37.028 --rc genhtml_legend=1 00:08:37.028 --rc geninfo_all_blocks=1 00:08:37.028 --rc geninfo_unexecuted_blocks=1 00:08:37.028 00:08:37.028 ' 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:37.028 06:53:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:37.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:37.028 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:37.029 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:37.029 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:37.029 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:37.029 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.029 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:37.029 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:37.029 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:37.029 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.029 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.029 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.029 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:37.029 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:37.029 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:37.029 06:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:38.936 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:38.936 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.936 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:38.937 06:53:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:38.937 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:38.937 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.937 06:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:39.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:08:39.196 00:08:39.196 --- 10.0.0.2 ping statistics --- 00:08:39.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.196 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:39.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:08:39.196 00:08:39.196 --- 10.0.0.1 ping statistics --- 00:08:39.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.196 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=131843 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 131843 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 131843 ']' 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.196 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:39.455 [2024-11-18 06:54:00.175666] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:08:39.455 [2024-11-18 06:54:00.175750] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.455 [2024-11-18 06:54:00.251325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.455 [2024-11-18 06:54:00.297441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.455 [2024-11-18 06:54:00.297526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.455 [2024-11-18 06:54:00.297542] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.455 [2024-11-18 06:54:00.297554] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.455 [2024-11-18 06:54:00.297578] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.455 [2024-11-18 06:54:00.298242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.455 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.455 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:39.455 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:39.455 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:39.455 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:39.714 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.714 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:39.973 [2024-11-18 06:54:00.699758] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.973 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:39.973 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:39.973 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.973 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:39.973 ************************************ 00:08:39.973 START TEST lvs_grow_clean 00:08:39.973 ************************************ 00:08:39.973 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:39.973 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:39.973 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:39.973 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:39.973 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:39.973 06:54:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:39.973 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:39.973 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:39.973 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:39.973 06:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:40.232 06:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:40.232 06:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:40.491 06:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=58bf4e74-f085-4da3-94bc-0a63008fc98e 00:08:40.491 06:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58bf4e74-f085-4da3-94bc-0a63008fc98e 00:08:40.491 06:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:40.748 06:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:40.748 06:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:40.748 06:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 58bf4e74-f085-4da3-94bc-0a63008fc98e lvol 150 00:08:41.006 06:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f95957f8-141e-4667-ae23-e4f1a16530a5 00:08:41.006 06:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:41.006 06:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:41.264 [2024-11-18 06:54:02.153994] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:41.264 [2024-11-18 06:54:02.154094] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:41.264 true 00:08:41.264 06:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
58bf4e74-f085-4da3-94bc-0a63008fc98e 00:08:41.264 06:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:41.521 06:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:41.521 06:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:41.779 06:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f95957f8-141e-4667-ae23-e4f1a16530a5 00:08:42.345 06:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:42.345 [2024-11-18 06:54:03.281426] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.345 06:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:42.604 06:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=132383 00:08:42.604 06:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:42.604 06:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:42.604 06:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 132383 /var/tmp/bdevperf.sock 00:08:42.604 06:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 132383 ']' 00:08:42.604 06:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:42.604 06:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.604 06:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:42.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:42.604 06:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.604 06:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:42.863 [2024-11-18 06:54:03.624266] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
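Stripped of the xtrace prefixes, lvs_grow_clean builds its stack with a short RPC sequence. A condensed sketch of the calls exercised above, with rpc.py standing in for the full scripts/rpc.py path and $lvs / $lvol holding the uuids returned by the create calls (58bf4e74-... and f95957f8-... in this run); the TCP transport itself was already created at nvmf_lvs_grow.sh@100 further up:

    truncate -s 200M test/nvmf/target/aio_bdev                          # 200 MiB backing file
    rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096      # AIO bdev, 4 KiB blocks
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)                # 4 MiB clusters -> 49 data clusters
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)                  # 150 MiB logical volume
    truncate -s 400M test/nvmf/target/aio_bdev                          # grow the file underneath...
    rpc.py bdev_aio_rescan aio_bdev                                     # ...and let the AIO bdev notice
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: bdevperf connects over TCP and the lvol shows up as Nvme0n1.
    bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
           -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests                 # 10 s of I/O, during which the lvstore is grown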
00:08:42.863 [2024-11-18 06:54:03.624335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132383 ] 00:08:42.863 [2024-11-18 06:54:03.693839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.863 [2024-11-18 06:54:03.744961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.121 06:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.121 06:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:43.121 06:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:43.380 Nvme0n1 00:08:43.638 06:54:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:43.904 [ 00:08:43.904 { 00:08:43.904 "name": "Nvme0n1", 00:08:43.904 "aliases": [ 00:08:43.904 "f95957f8-141e-4667-ae23-e4f1a16530a5" 00:08:43.904 ], 00:08:43.904 "product_name": "NVMe disk", 00:08:43.904 "block_size": 4096, 00:08:43.904 "num_blocks": 38912, 00:08:43.904 "uuid": "f95957f8-141e-4667-ae23-e4f1a16530a5", 00:08:43.904 "numa_id": 0, 00:08:43.904 "assigned_rate_limits": { 00:08:43.904 "rw_ios_per_sec": 0, 00:08:43.904 "rw_mbytes_per_sec": 0, 00:08:43.904 "r_mbytes_per_sec": 0, 00:08:43.904 "w_mbytes_per_sec": 0 00:08:43.904 }, 00:08:43.904 "claimed": false, 00:08:43.904 "zoned": false, 00:08:43.904 "supported_io_types": { 00:08:43.904 "read": true, 00:08:43.904 "write": true, 00:08:43.904 "unmap": true, 00:08:43.904 "flush": true, 00:08:43.904 "reset": true, 00:08:43.904 "nvme_admin": true, 00:08:43.904 "nvme_io": true, 00:08:43.904 "nvme_io_md": false, 00:08:43.904 "write_zeroes": true, 00:08:43.904 "zcopy": false, 00:08:43.904 "get_zone_info": false, 00:08:43.904 "zone_management": false, 00:08:43.904 "zone_append": false, 00:08:43.904 "compare": true, 00:08:43.904 "compare_and_write": true, 00:08:43.904 "abort": true, 00:08:43.904 "seek_hole": false, 00:08:43.904 "seek_data": false, 00:08:43.904 "copy": true, 00:08:43.904 "nvme_iov_md": false 00:08:43.904 }, 00:08:43.904 "memory_domains": [ 00:08:43.904 { 00:08:43.904 "dma_device_id": "system", 00:08:43.904 "dma_device_type": 1 00:08:43.904 } 00:08:43.904 ], 00:08:43.904 "driver_specific": { 00:08:43.904 "nvme": [ 00:08:43.904 { 00:08:43.904 "trid": { 00:08:43.904 "trtype": "TCP", 00:08:43.904 "adrfam": "IPv4", 00:08:43.904 "traddr": "10.0.0.2", 00:08:43.904 "trsvcid": "4420", 00:08:43.904 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:43.904 }, 00:08:43.904 "ctrlr_data": { 00:08:43.904 "cntlid": 1, 00:08:43.904 "vendor_id": "0x8086", 00:08:43.904 "model_number": "SPDK bdev Controller", 00:08:43.904 "serial_number": "SPDK0", 00:08:43.904 "firmware_revision": "25.01", 00:08:43.904 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:43.904 "oacs": { 00:08:43.904 "security": 0, 00:08:43.904 "format": 0, 00:08:43.904 "firmware": 0, 00:08:43.904 "ns_manage": 0 00:08:43.904 }, 00:08:43.904 "multi_ctrlr": true, 00:08:43.904 
"ana_reporting": false 00:08:43.904 }, 00:08:43.904 "vs": { 00:08:43.904 "nvme_version": "1.3" 00:08:43.904 }, 00:08:43.904 "ns_data": { 00:08:43.904 "id": 1, 00:08:43.904 "can_share": true 00:08:43.904 } 00:08:43.904 } 00:08:43.904 ], 00:08:43.904 "mp_policy": "active_passive" 00:08:43.904 } 00:08:43.904 } 00:08:43.904 ] 00:08:43.904 06:54:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=132521 00:08:43.904 06:54:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:43.904 06:54:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:43.904 Running I/O for 10 seconds... 00:08:44.838 Latency(us) 00:08:44.838 [2024-11-18T05:54:05.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.838 Nvme0n1 : 1.00 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:08:44.838 [2024-11-18T05:54:05.816Z] =================================================================================================================== 00:08:44.838 [2024-11-18T05:54:05.816Z] Total : 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:08:44.838 00:08:45.775 06:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 58bf4e74-f085-4da3-94bc-0a63008fc98e 00:08:45.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.775 Nvme0n1 : 2.00 15113.50 59.04 0.00 0.00 0.00 0.00 0.00 00:08:45.775 [2024-11-18T05:54:06.753Z] =================================================================================================================== 00:08:45.775 [2024-11-18T05:54:06.753Z] Total : 15113.50 59.04 0.00 0.00 0.00 0.00 0.00 00:08:45.775 00:08:46.033 true 00:08:46.033 06:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58bf4e74-f085-4da3-94bc-0a63008fc98e 00:08:46.033 06:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:46.292 06:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:46.292 06:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:46.292 06:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 132521 00:08:46.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.859 Nvme0n1 : 3.00 15240.33 59.53 0.00 0.00 0.00 0.00 0.00 00:08:46.859 [2024-11-18T05:54:07.837Z] =================================================================================================================== 00:08:46.859 [2024-11-18T05:54:07.837Z] Total : 15240.33 59.53 0.00 0.00 0.00 0.00 0.00 00:08:46.859 00:08:47.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.795 Nvme0n1 : 4.00 15335.50 59.90 0.00 0.00 0.00 0.00 0.00 00:08:47.795 [2024-11-18T05:54:08.773Z] 
=================================================================================================================== 00:08:47.795 [2024-11-18T05:54:08.773Z] Total : 15335.50 59.90 0.00 0.00 0.00 0.00 0.00 00:08:47.795 00:08:49.173 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.173 Nvme0n1 : 5.00 15418.00 60.23 0.00 0.00 0.00 0.00 0.00 00:08:49.173 [2024-11-18T05:54:10.151Z] =================================================================================================================== 00:08:49.173 [2024-11-18T05:54:10.151Z] Total : 15418.00 60.23 0.00 0.00 0.00 0.00 0.00 00:08:49.173 00:08:50.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.109 Nvme0n1 : 6.00 15462.50 60.40 0.00 0.00 0.00 0.00 0.00 00:08:50.109 [2024-11-18T05:54:11.087Z] =================================================================================================================== 00:08:50.109 [2024-11-18T05:54:11.087Z] Total : 15462.50 60.40 0.00 0.00 0.00 0.00 0.00 00:08:50.109 00:08:51.045 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.045 Nvme0n1 : 7.00 15512.43 60.60 0.00 0.00 0.00 0.00 0.00 00:08:51.045 [2024-11-18T05:54:12.023Z] =================================================================================================================== 00:08:51.045 [2024-11-18T05:54:12.023Z] Total : 15512.43 60.60 0.00 0.00 0.00 0.00 0.00 00:08:51.045 00:08:51.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.981 Nvme0n1 : 8.00 15557.75 60.77 0.00 0.00 0.00 0.00 0.00 00:08:51.981 [2024-11-18T05:54:12.959Z] =================================================================================================================== 00:08:51.981 [2024-11-18T05:54:12.959Z] Total : 15557.75 60.77 0.00 0.00 0.00 0.00 0.00 00:08:51.981 00:08:52.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.918 Nvme0n1 : 9.00 15586.11 60.88 0.00 0.00 0.00 0.00 0.00 00:08:52.918 [2024-11-18T05:54:13.896Z] =================================================================================================================== 00:08:52.918 [2024-11-18T05:54:13.896Z] Total : 15586.11 60.88 0.00 0.00 0.00 0.00 0.00 00:08:52.918 00:08:53.855 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.855 Nvme0n1 : 10.00 15615.00 61.00 0.00 0.00 0.00 0.00 0.00 00:08:53.855 [2024-11-18T05:54:14.833Z] =================================================================================================================== 00:08:53.855 [2024-11-18T05:54:14.833Z] Total : 15615.00 61.00 0.00 0.00 0.00 0.00 0.00 00:08:53.855 00:08:53.855 00:08:53.855 Latency(us) 00:08:53.855 [2024-11-18T05:54:14.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.855 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.855 Nvme0n1 : 10.01 15617.06 61.00 0.00 0.00 8191.63 4684.61 18447.17 00:08:53.855 [2024-11-18T05:54:14.833Z] =================================================================================================================== 00:08:53.855 [2024-11-18T05:54:14.833Z] Total : 15617.06 61.00 0.00 0.00 8191.63 4684.61 18447.17 00:08:53.855 { 00:08:53.855 "results": [ 00:08:53.855 { 00:08:53.855 "job": "Nvme0n1", 00:08:53.855 "core_mask": "0x2", 00:08:53.855 "workload": "randwrite", 00:08:53.855 "status": "finished", 00:08:53.855 "queue_depth": 128, 00:08:53.855 "io_size": 4096, 00:08:53.855 
"runtime": 10.006879, 00:08:53.855 "iops": 15617.05702647149, 00:08:53.855 "mibps": 61.00412900965426, 00:08:53.855 "io_failed": 0, 00:08:53.855 "io_timeout": 0, 00:08:53.855 "avg_latency_us": 8191.6260749007115, 00:08:53.855 "min_latency_us": 4684.61037037037, 00:08:53.855 "max_latency_us": 18447.17037037037 00:08:53.855 } 00:08:53.855 ], 00:08:53.855 "core_count": 1 00:08:53.855 } 00:08:53.855 06:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 132383 00:08:53.855 06:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 132383 ']' 00:08:53.855 06:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 132383 00:08:53.855 06:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:53.855 06:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:53.855 06:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 132383 00:08:53.855 06:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:53.855 06:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:53.855 06:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 132383' 00:08:53.855 killing process with pid 132383 00:08:53.855 06:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 132383 00:08:53.855 Received shutdown signal, test time was about 10.000000 seconds 00:08:53.855 00:08:53.855 Latency(us) 00:08:53.855 [2024-11-18T05:54:14.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.855 [2024-11-18T05:54:14.833Z] =================================================================================================================== 00:08:53.855 [2024-11-18T05:54:14.833Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:53.855 06:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 132383 00:08:54.114 06:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:54.372 06:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:54.631 06:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58bf4e74-f085-4da3-94bc-0a63008fc98e 00:08:54.631 06:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:54.889 06:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:54.889 06:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:54.889 06:54:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:55.148 [2024-11-18 06:54:16.082261] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:55.148 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58bf4e74-f085-4da3-94bc-0a63008fc98e 00:08:55.148 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:55.148 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58bf4e74-f085-4da3-94bc-0a63008fc98e 00:08:55.148 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.148 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:55.148 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.148 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:55.148 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.148 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:55.148 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.148 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:55.148 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58bf4e74-f085-4da3-94bc-0a63008fc98e 00:08:55.407 request: 00:08:55.407 { 00:08:55.407 "uuid": "58bf4e74-f085-4da3-94bc-0a63008fc98e", 00:08:55.407 "method": "bdev_lvol_get_lvstores", 00:08:55.407 "req_id": 1 00:08:55.407 } 00:08:55.407 Got JSON-RPC error response 00:08:55.407 response: 00:08:55.407 { 00:08:55.407 "code": -19, 00:08:55.407 "message": "No such device" 00:08:55.407 } 00:08:55.665 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:55.665 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:55.665 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:55.665 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:55.665 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:55.924 aio_bdev 00:08:55.924 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f95957f8-141e-4667-ae23-e4f1a16530a5 00:08:55.924 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=f95957f8-141e-4667-ae23-e4f1a16530a5 00:08:55.924 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:55.924 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:55.924 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:55.924 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:55.924 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:56.183 06:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f95957f8-141e-4667-ae23-e4f1a16530a5 -t 2000 00:08:56.442 [ 00:08:56.442 { 00:08:56.442 "name": "f95957f8-141e-4667-ae23-e4f1a16530a5", 00:08:56.442 "aliases": [ 00:08:56.442 "lvs/lvol" 00:08:56.442 ], 00:08:56.442 "product_name": "Logical Volume", 00:08:56.442 "block_size": 4096, 00:08:56.442 "num_blocks": 38912, 00:08:56.442 "uuid": "f95957f8-141e-4667-ae23-e4f1a16530a5", 00:08:56.442 "assigned_rate_limits": { 00:08:56.442 "rw_ios_per_sec": 0, 00:08:56.442 "rw_mbytes_per_sec": 0, 00:08:56.442 "r_mbytes_per_sec": 0, 00:08:56.442 "w_mbytes_per_sec": 0 00:08:56.442 }, 00:08:56.442 "claimed": false, 00:08:56.442 "zoned": false, 00:08:56.442 "supported_io_types": { 00:08:56.442 "read": true, 00:08:56.442 "write": true, 00:08:56.442 "unmap": true, 00:08:56.442 "flush": false, 00:08:56.442 "reset": true, 00:08:56.442 "nvme_admin": false, 00:08:56.442 "nvme_io": false, 00:08:56.442 "nvme_io_md": false, 00:08:56.442 "write_zeroes": true, 00:08:56.442 "zcopy": false, 00:08:56.442 "get_zone_info": false, 00:08:56.442 "zone_management": false, 00:08:56.442 "zone_append": false, 00:08:56.442 "compare": false, 00:08:56.442 "compare_and_write": false, 00:08:56.442 "abort": false, 00:08:56.442 "seek_hole": true, 00:08:56.442 "seek_data": true, 00:08:56.442 "copy": false, 00:08:56.442 "nvme_iov_md": false 00:08:56.442 }, 00:08:56.442 "driver_specific": { 00:08:56.442 "lvol": { 00:08:56.442 "lvol_store_uuid": "58bf4e74-f085-4da3-94bc-0a63008fc98e", 00:08:56.442 "base_bdev": "aio_bdev", 00:08:56.442 "thin_provision": false, 00:08:56.442 "num_allocated_clusters": 38, 00:08:56.442 "snapshot": false, 00:08:56.442 "clone": false, 00:08:56.442 "esnap_clone": false 00:08:56.442 } 00:08:56.442 } 00:08:56.442 } 00:08:56.442 ] 00:08:56.442 06:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:56.442 06:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58bf4e74-f085-4da3-94bc-0a63008fc98e 00:08:56.442 
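The NOT wrapper used above is autotest_common.sh's invert-the-exit-status helper, so this passage asserts that the lvstore disappears together with its base bdev and comes back intact when the same file is re-attached. In plain shell the check is roughly:

    rpc.py bdev_aio_delete aio_bdev                                     # hot-remove: "closing lvstore lvs"
    ! rpc.py bdev_lvol_get_lvstores -u "$lvs"                           # must now fail: JSON-RPC -19, "No such device"
    rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096      # re-attach the same (already grown) file
    rpc.py bdev_wait_for_examine                                        # lvstore is re-opened, lvol bdev reappears
    rpc.py bdev_get_bdevs -b "$lvol" -t 2000                            # still reports 38 allocated clusters
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'   # 61, unchanged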
06:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:56.701 06:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:56.701 06:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58bf4e74-f085-4da3-94bc-0a63008fc98e 00:08:56.701 06:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:56.960 06:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:56.960 06:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f95957f8-141e-4667-ae23-e4f1a16530a5 00:08:57.219 06:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 58bf4e74-f085-4da3-94bc-0a63008fc98e 00:08:57.478 06:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:57.737 06:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:57.737 00:08:57.737 real 0m17.842s 00:08:57.737 user 0m17.474s 00:08:57.737 sys 0m1.776s 00:08:57.737 06:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.737 06:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:57.737 ************************************ 00:08:57.737 END TEST lvs_grow_clean 00:08:57.737 ************************************ 00:08:57.737 06:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:57.737 06:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:57.737 06:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.737 06:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:57.737 ************************************ 00:08:57.737 START TEST lvs_grow_dirty 00:08:57.737 ************************************ 00:08:57.737 06:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:57.737 06:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:57.737 06:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:57.737 06:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:57.737 06:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:57.737 06:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:57.737 06:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:57.737 06:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:57.737 06:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:57.737 06:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:57.997 06:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:57.997 06:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:58.256 06:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4163763d-ac97-40b5-8e8b-836fb3bf6e4f 00:08:58.256 06:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4163763d-ac97-40b5-8e8b-836fb3bf6e4f 00:08:58.256 06:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:58.515 06:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:58.515 06:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:58.515 06:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4163763d-ac97-40b5-8e8b-836fb3bf6e4f lvol 150 00:08:59.079 06:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=dbcec946-151c-4287-8074-ad31fc4d31e5 00:08:59.079 06:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:59.079 06:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:59.079 [2024-11-18 06:54:20.012973] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:59.079 [2024-11-18 06:54:20.013082] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:59.079 true 00:08:59.079 06:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:59.079 06:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4163763d-ac97-40b5-8e8b-836fb3bf6e4f 00:08:59.337 06:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:59.337 06:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:59.904 06:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dbcec946-151c-4287-8074-ad31fc4d31e5 00:08:59.904 06:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:00.162 [2024-11-18 06:54:21.124368] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.162 06:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:00.729 06:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=135075 00:09:00.729 06:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:00.729 06:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:00.729 06:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 135075 /var/tmp/bdevperf.sock 00:09:00.729 06:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 135075 ']' 00:09:00.729 06:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:00.729 06:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.729 06:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:00.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:00.729 06:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.729 06:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:00.729 [2024-11-18 06:54:21.455085] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
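The cluster arithmetic is the same in the clean and dirty variants, and every number below appears in this log. Note that truncating the backing file and rescanning the AIO bdev only updates the block count; the lvstore stays at 49 data clusters until bdev_lvol_grow_lvstore is called a few lines further down. Roughly:

    # 200 MiB file / 4 MiB clusters = 50, minus lvstore metadata -> total_data_clusters == 49
    # 150 MiB lvol / 4 MiB clusters = 37.5 -> 38 allocated clusters (thin_provision is false)
    # after growing to 400 MiB -> total_data_clusters == 99 and free_clusters == 99 - 38 == 61
    truncate -s 400M test/nvmf/target/aio_bdev
    rpc.py bdev_aio_rescan aio_bdev              # 51200 -> 102400 blocks of 4 KiB; lvstore still 49 clusters
    rpc.py bdev_lvol_grow_lvstore -u "$lvs"      # lvstore now spans the whole file: 99 clusters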
00:09:00.729 [2024-11-18 06:54:21.455171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135075 ] 00:09:00.729 [2024-11-18 06:54:21.519376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.729 [2024-11-18 06:54:21.563951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.729 06:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.729 06:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:00.729 06:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:01.296 Nvme0n1 00:09:01.296 06:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:01.554 [ 00:09:01.554 { 00:09:01.554 "name": "Nvme0n1", 00:09:01.554 "aliases": [ 00:09:01.554 "dbcec946-151c-4287-8074-ad31fc4d31e5" 00:09:01.554 ], 00:09:01.554 "product_name": "NVMe disk", 00:09:01.554 "block_size": 4096, 00:09:01.554 "num_blocks": 38912, 00:09:01.554 "uuid": "dbcec946-151c-4287-8074-ad31fc4d31e5", 00:09:01.554 "numa_id": 0, 00:09:01.554 "assigned_rate_limits": { 00:09:01.554 "rw_ios_per_sec": 0, 00:09:01.554 "rw_mbytes_per_sec": 0, 00:09:01.554 "r_mbytes_per_sec": 0, 00:09:01.554 "w_mbytes_per_sec": 0 00:09:01.554 }, 00:09:01.554 "claimed": false, 00:09:01.554 "zoned": false, 00:09:01.554 "supported_io_types": { 00:09:01.554 "read": true, 00:09:01.554 "write": true, 00:09:01.554 "unmap": true, 00:09:01.554 "flush": true, 00:09:01.555 "reset": true, 00:09:01.555 "nvme_admin": true, 00:09:01.555 "nvme_io": true, 00:09:01.555 "nvme_io_md": false, 00:09:01.555 "write_zeroes": true, 00:09:01.555 "zcopy": false, 00:09:01.555 "get_zone_info": false, 00:09:01.555 "zone_management": false, 00:09:01.555 "zone_append": false, 00:09:01.555 "compare": true, 00:09:01.555 "compare_and_write": true, 00:09:01.555 "abort": true, 00:09:01.555 "seek_hole": false, 00:09:01.555 "seek_data": false, 00:09:01.555 "copy": true, 00:09:01.555 "nvme_iov_md": false 00:09:01.555 }, 00:09:01.555 "memory_domains": [ 00:09:01.555 { 00:09:01.555 "dma_device_id": "system", 00:09:01.555 "dma_device_type": 1 00:09:01.555 } 00:09:01.555 ], 00:09:01.555 "driver_specific": { 00:09:01.555 "nvme": [ 00:09:01.555 { 00:09:01.555 "trid": { 00:09:01.555 "trtype": "TCP", 00:09:01.555 "adrfam": "IPv4", 00:09:01.555 "traddr": "10.0.0.2", 00:09:01.555 "trsvcid": "4420", 00:09:01.555 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:01.555 }, 00:09:01.555 "ctrlr_data": { 00:09:01.555 "cntlid": 1, 00:09:01.555 "vendor_id": "0x8086", 00:09:01.555 "model_number": "SPDK bdev Controller", 00:09:01.555 "serial_number": "SPDK0", 00:09:01.555 "firmware_revision": "25.01", 00:09:01.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:01.555 "oacs": { 00:09:01.555 "security": 0, 00:09:01.555 "format": 0, 00:09:01.555 "firmware": 0, 00:09:01.555 "ns_manage": 0 00:09:01.555 }, 00:09:01.555 "multi_ctrlr": true, 00:09:01.555 
"ana_reporting": false 00:09:01.555 }, 00:09:01.555 "vs": { 00:09:01.555 "nvme_version": "1.3" 00:09:01.555 }, 00:09:01.555 "ns_data": { 00:09:01.555 "id": 1, 00:09:01.555 "can_share": true 00:09:01.555 } 00:09:01.555 } 00:09:01.555 ], 00:09:01.555 "mp_policy": "active_passive" 00:09:01.555 } 00:09:01.555 } 00:09:01.555 ] 00:09:01.555 06:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=135107 00:09:01.555 06:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:01.555 06:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:01.814 Running I/O for 10 seconds... 00:09:02.749 Latency(us) 00:09:02.749 [2024-11-18T05:54:23.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.749 Nvme0n1 : 1.00 15008.00 58.62 0.00 0.00 0.00 0.00 0.00 00:09:02.749 [2024-11-18T05:54:23.727Z] =================================================================================================================== 00:09:02.749 [2024-11-18T05:54:23.727Z] Total : 15008.00 58.62 0.00 0.00 0.00 0.00 0.00 00:09:02.749 00:09:03.684 06:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4163763d-ac97-40b5-8e8b-836fb3bf6e4f 00:09:03.684 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.684 Nvme0n1 : 2.00 15234.50 59.51 0.00 0.00 0.00 0.00 0.00 00:09:03.684 [2024-11-18T05:54:24.662Z] =================================================================================================================== 00:09:03.684 [2024-11-18T05:54:24.662Z] Total : 15234.50 59.51 0.00 0.00 0.00 0.00 0.00 00:09:03.684 00:09:03.943 true 00:09:03.943 06:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4163763d-ac97-40b5-8e8b-836fb3bf6e4f 00:09:03.943 06:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:04.202 06:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:04.202 06:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:04.202 06:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 135107 00:09:04.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.769 Nvme0n1 : 3.00 15366.33 60.02 0.00 0.00 0.00 0.00 0.00 00:09:04.769 [2024-11-18T05:54:25.747Z] =================================================================================================================== 00:09:04.769 [2024-11-18T05:54:25.747Z] Total : 15366.33 60.02 0.00 0.00 0.00 0.00 0.00 00:09:04.769 00:09:05.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.706 Nvme0n1 : 4.00 15480.50 60.47 0.00 0.00 0.00 0.00 0.00 00:09:05.706 [2024-11-18T05:54:26.684Z] 
=================================================================================================================== 00:09:05.706 [2024-11-18T05:54:26.684Z] Total : 15480.50 60.47 0.00 0.00 0.00 0.00 0.00 00:09:05.706 00:09:06.642 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.642 Nvme0n1 : 5.00 15573.60 60.83 0.00 0.00 0.00 0.00 0.00 00:09:06.642 [2024-11-18T05:54:27.621Z] =================================================================================================================== 00:09:06.643 [2024-11-18T05:54:27.621Z] Total : 15573.60 60.83 0.00 0.00 0.00 0.00 0.00 00:09:06.643 00:09:08.020 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.020 Nvme0n1 : 6.00 15625.17 61.04 0.00 0.00 0.00 0.00 0.00 00:09:08.020 [2024-11-18T05:54:28.998Z] =================================================================================================================== 00:09:08.020 [2024-11-18T05:54:28.998Z] Total : 15625.17 61.04 0.00 0.00 0.00 0.00 0.00 00:09:08.020 00:09:08.587 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.587 Nvme0n1 : 7.00 15667.14 61.20 0.00 0.00 0.00 0.00 0.00 00:09:08.587 [2024-11-18T05:54:29.565Z] =================================================================================================================== 00:09:08.587 [2024-11-18T05:54:29.565Z] Total : 15667.14 61.20 0.00 0.00 0.00 0.00 0.00 00:09:08.587 00:09:09.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.964 Nvme0n1 : 8.00 15678.38 61.24 0.00 0.00 0.00 0.00 0.00 00:09:09.964 [2024-11-18T05:54:30.942Z] =================================================================================================================== 00:09:09.964 [2024-11-18T05:54:30.942Z] Total : 15678.38 61.24 0.00 0.00 0.00 0.00 0.00 00:09:09.964 00:09:10.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.901 Nvme0n1 : 9.00 15694.67 61.31 0.00 0.00 0.00 0.00 0.00 00:09:10.901 [2024-11-18T05:54:31.879Z] =================================================================================================================== 00:09:10.901 [2024-11-18T05:54:31.879Z] Total : 15694.67 61.31 0.00 0.00 0.00 0.00 0.00 00:09:10.901 00:09:11.837 00:09:11.837 Latency(us) 00:09:11.837 [2024-11-18T05:54:32.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.837 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.837 Nvme0n1 : 10.00 15717.19 61.40 0.00 0.00 8139.45 4490.43 17670.45 00:09:11.837 [2024-11-18T05:54:32.815Z] =================================================================================================================== 00:09:11.837 [2024-11-18T05:54:32.815Z] Total : 15717.19 61.40 0.00 0.00 8139.45 4490.43 17670.45 00:09:11.837 { 00:09:11.837 "results": [ 00:09:11.837 { 00:09:11.837 "job": "Nvme0n1", 00:09:11.837 "core_mask": "0x2", 00:09:11.837 "workload": "randwrite", 00:09:11.837 "status": "finished", 00:09:11.837 "queue_depth": 128, 00:09:11.837 "io_size": 4096, 00:09:11.837 "runtime": 10.002234, 00:09:11.837 "iops": 15717.188780026541, 00:09:11.837 "mibps": 61.39526867197868, 00:09:11.837 "io_failed": 0, 00:09:11.837 "io_timeout": 0, 00:09:11.837 "avg_latency_us": 8139.4526401496105, 00:09:11.837 "min_latency_us": 4490.42962962963, 00:09:11.837 "max_latency_us": 17670.447407407406 00:09:11.837 } 00:09:11.837 ], 00:09:11.837 "core_count": 1 00:09:11.837 } 00:09:11.837 06:54:32 
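Two quick consistency checks on the results block above, using only numbers from this run: 15717.19 IOPS x 4096 B per I/O is roughly 61.4 MiB/s, matching the reported mibps of 61.395; and 15717.19 IOPS x 8.139 ms average latency works out to about 128 I/Os in flight, consistent with the configured queue depth of 128 (Little's law).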
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 135075 00:09:11.837 06:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 135075 ']' 00:09:11.837 06:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 135075 00:09:11.837 06:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:11.837 06:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.837 06:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 135075 00:09:11.837 06:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:11.837 06:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:11.837 06:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 135075' 00:09:11.837 killing process with pid 135075 00:09:11.837 06:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 135075 00:09:11.837 Received shutdown signal, test time was about 10.000000 seconds 00:09:11.837 00:09:11.837 Latency(us) 00:09:11.837 [2024-11-18T05:54:32.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.837 [2024-11-18T05:54:32.815Z] =================================================================================================================== 00:09:11.837 [2024-11-18T05:54:32.815Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:11.837 06:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 135075 00:09:11.837 06:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:12.096 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:12.664 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4163763d-ac97-40b5-8e8b-836fb3bf6e4f 00:09:12.664 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:12.664 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:12.664 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:12.665 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 131843 00:09:12.665 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 131843 00:09:12.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 131843 Killed "${NVMF_APP[@]}" "$@" 00:09:12.665 06:54:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:12.665 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:12.665 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:12.923 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:12.923 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:12.923 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=136435 00:09:12.923 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:12.923 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 136435 00:09:12.923 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 136435 ']' 00:09:12.923 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.923 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.923 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.923 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.923 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:12.923 [2024-11-18 06:54:33.692168] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:12.923 [2024-11-18 06:54:33.692254] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.923 [2024-11-18 06:54:33.763547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.923 [2024-11-18 06:54:33.807645] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.923 [2024-11-18 06:54:33.807694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.923 [2024-11-18 06:54:33.807724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.923 [2024-11-18 06:54:33.807736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.923 [2024-11-18 06:54:33.807746] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
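This is where the dirty variant departs from the clean one: after the lvstore has been grown and the bdevperf run has finished, the test SIGKILLs the target (pid 131843 above) instead of deleting the lvol and lvstore, so the lvstore is never cleanly unloaded. A fresh target is started, and re-creating the AIO bdev on the same file is what triggers the blobstore recovery notices that follow. A rough sketch, with $lvs holding the lvstore uuid (4163763d-... here):

    kill -9 "$nvmfpid"                                                  # 131843: no clean lvstore unload
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # fresh target (pid 136435 here)
    rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096      # blobstore recovery runs on load
    rpc.py bdev_wait_for_examine                                        # lvol bdev (dbcec946-...) reappears
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # expected: still 61
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # expected: still 99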
00:09:12.923 [2024-11-18 06:54:33.808399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.181 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.181 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:13.181 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:13.181 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:13.181 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:13.181 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.181 06:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:13.440 [2024-11-18 06:54:34.196271] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:13.440 [2024-11-18 06:54:34.196394] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:13.440 [2024-11-18 06:54:34.196441] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:13.440 06:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:13.440 06:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev dbcec946-151c-4287-8074-ad31fc4d31e5 00:09:13.440 06:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=dbcec946-151c-4287-8074-ad31fc4d31e5 00:09:13.440 06:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:13.440 06:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:13.440 06:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:13.440 06:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:13.440 06:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:13.699 06:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dbcec946-151c-4287-8074-ad31fc4d31e5 -t 2000 00:09:13.957 [ 00:09:13.957 { 00:09:13.957 "name": "dbcec946-151c-4287-8074-ad31fc4d31e5", 00:09:13.957 "aliases": [ 00:09:13.957 "lvs/lvol" 00:09:13.957 ], 00:09:13.957 "product_name": "Logical Volume", 00:09:13.957 "block_size": 4096, 00:09:13.957 "num_blocks": 38912, 00:09:13.957 "uuid": "dbcec946-151c-4287-8074-ad31fc4d31e5", 00:09:13.957 "assigned_rate_limits": { 00:09:13.957 "rw_ios_per_sec": 0, 00:09:13.957 "rw_mbytes_per_sec": 0, 00:09:13.957 "r_mbytes_per_sec": 0, 00:09:13.957 "w_mbytes_per_sec": 0 00:09:13.957 }, 00:09:13.957 "claimed": false, 00:09:13.957 "zoned": false, 
00:09:13.957 "supported_io_types": { 00:09:13.957 "read": true, 00:09:13.957 "write": true, 00:09:13.957 "unmap": true, 00:09:13.957 "flush": false, 00:09:13.957 "reset": true, 00:09:13.957 "nvme_admin": false, 00:09:13.957 "nvme_io": false, 00:09:13.957 "nvme_io_md": false, 00:09:13.957 "write_zeroes": true, 00:09:13.957 "zcopy": false, 00:09:13.957 "get_zone_info": false, 00:09:13.957 "zone_management": false, 00:09:13.957 "zone_append": false, 00:09:13.957 "compare": false, 00:09:13.957 "compare_and_write": false, 00:09:13.957 "abort": false, 00:09:13.957 "seek_hole": true, 00:09:13.957 "seek_data": true, 00:09:13.957 "copy": false, 00:09:13.957 "nvme_iov_md": false 00:09:13.957 }, 00:09:13.957 "driver_specific": { 00:09:13.957 "lvol": { 00:09:13.957 "lvol_store_uuid": "4163763d-ac97-40b5-8e8b-836fb3bf6e4f", 00:09:13.957 "base_bdev": "aio_bdev", 00:09:13.957 "thin_provision": false, 00:09:13.957 "num_allocated_clusters": 38, 00:09:13.957 "snapshot": false, 00:09:13.957 "clone": false, 00:09:13.957 "esnap_clone": false 00:09:13.957 } 00:09:13.957 } 00:09:13.957 } 00:09:13.957 ] 00:09:13.957 06:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:13.957 06:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4163763d-ac97-40b5-8e8b-836fb3bf6e4f 00:09:13.957 06:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:14.216 06:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:14.216 06:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4163763d-ac97-40b5-8e8b-836fb3bf6e4f 00:09:14.216 06:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:14.475 06:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:14.475 06:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:14.736 [2024-11-18 06:54:35.558162] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:14.736 06:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4163763d-ac97-40b5-8e8b-836fb3bf6e4f 00:09:14.736 06:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:14.736 06:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4163763d-ac97-40b5-8e8b-836fb3bf6e4f 00:09:14.736 06:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.736 06:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:09:14.736 06:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.736 06:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.736 06:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.736 06:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.736 06:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.736 06:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:14.736 06:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4163763d-ac97-40b5-8e8b-836fb3bf6e4f 00:09:14.995 request: 00:09:14.995 { 00:09:14.995 "uuid": "4163763d-ac97-40b5-8e8b-836fb3bf6e4f", 00:09:14.995 "method": "bdev_lvol_get_lvstores", 00:09:14.995 "req_id": 1 00:09:14.995 } 00:09:14.995 Got JSON-RPC error response 00:09:14.995 response: 00:09:14.995 { 00:09:14.995 "code": -19, 00:09:14.995 "message": "No such device" 00:09:14.995 } 00:09:14.995 06:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:14.995 06:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:14.995 06:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:14.995 06:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:14.995 06:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:15.254 aio_bdev 00:09:15.254 06:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev dbcec946-151c-4287-8074-ad31fc4d31e5 00:09:15.254 06:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=dbcec946-151c-4287-8074-ad31fc4d31e5 00:09:15.254 06:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:15.254 06:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:15.254 06:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:15.254 06:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:15.254 06:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:15.512 06:54:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dbcec946-151c-4287-8074-ad31fc4d31e5 -t 2000 00:09:15.771 [ 00:09:15.771 { 00:09:15.771 "name": "dbcec946-151c-4287-8074-ad31fc4d31e5", 00:09:15.771 "aliases": [ 00:09:15.771 "lvs/lvol" 00:09:15.771 ], 00:09:15.771 "product_name": "Logical Volume", 00:09:15.771 "block_size": 4096, 00:09:15.771 "num_blocks": 38912, 00:09:15.771 "uuid": "dbcec946-151c-4287-8074-ad31fc4d31e5", 00:09:15.771 "assigned_rate_limits": { 00:09:15.771 "rw_ios_per_sec": 0, 00:09:15.771 "rw_mbytes_per_sec": 0, 00:09:15.771 "r_mbytes_per_sec": 0, 00:09:15.771 "w_mbytes_per_sec": 0 00:09:15.771 }, 00:09:15.771 "claimed": false, 00:09:15.771 "zoned": false, 00:09:15.771 "supported_io_types": { 00:09:15.771 "read": true, 00:09:15.771 "write": true, 00:09:15.772 "unmap": true, 00:09:15.772 "flush": false, 00:09:15.772 "reset": true, 00:09:15.772 "nvme_admin": false, 00:09:15.772 "nvme_io": false, 00:09:15.772 "nvme_io_md": false, 00:09:15.772 "write_zeroes": true, 00:09:15.772 "zcopy": false, 00:09:15.772 "get_zone_info": false, 00:09:15.772 "zone_management": false, 00:09:15.772 "zone_append": false, 00:09:15.772 "compare": false, 00:09:15.772 "compare_and_write": false, 00:09:15.772 "abort": false, 00:09:15.772 "seek_hole": true, 00:09:15.772 "seek_data": true, 00:09:15.772 "copy": false, 00:09:15.772 "nvme_iov_md": false 00:09:15.772 }, 00:09:15.772 "driver_specific": { 00:09:15.772 "lvol": { 00:09:15.772 "lvol_store_uuid": "4163763d-ac97-40b5-8e8b-836fb3bf6e4f", 00:09:15.772 "base_bdev": "aio_bdev", 00:09:15.772 "thin_provision": false, 00:09:15.772 "num_allocated_clusters": 38, 00:09:15.772 "snapshot": false, 00:09:15.772 "clone": false, 00:09:15.772 "esnap_clone": false 00:09:15.772 } 00:09:15.772 } 00:09:15.772 } 00:09:15.772 ] 00:09:15.772 06:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:15.772 06:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4163763d-ac97-40b5-8e8b-836fb3bf6e4f 00:09:15.772 06:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:16.030 06:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:16.030 06:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4163763d-ac97-40b5-8e8b-836fb3bf6e4f 00:09:16.030 06:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:16.289 06:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:16.289 06:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dbcec946-151c-4287-8074-ad31fc4d31e5 00:09:16.547 06:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4163763d-ac97-40b5-8e8b-836fb3bf6e4f 
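The recovery check traced above reads as one short sequence: re-create the AIO bdev, let bdev examine replay the dirty blobstore, confirm the lvol and the cluster counts survived, then tear the lvstore down. A sketch of that sequence, using only RPC calls that appear in the trace; the file path, bdev name and UUIDs are the ones from this run, and the restarted target is assumed to be listening on /var/tmp/spdk.sock:

#!/usr/bin/env bash
# Sketch of the post-crash recovery check from the trace above.
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
AIO_FILE="$SPDK/test/nvmf/target/aio_bdev"
LVOL_UUID=dbcec946-151c-4287-8074-ad31fc4d31e5
LVS_UUID=4163763d-ac97-40b5-8e8b-836fb3bf6e4f

# Re-attaching the file triggers blobstore recovery ("Performing recovery on
# blobstore" in the trace) and re-registers the lvstore and its lvol.
"$RPC" bdev_aio_create "$AIO_FILE" aio_bdev 4096
"$RPC" bdev_wait_for_examine
"$RPC" bdev_get_bdevs -b "$LVOL_UUID" -t 2000 > /dev/null   # the lvol is back

# The recovered lvstore should report the same geometry as before the kill.
free=$("$RPC" bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters')
total=$("$RPC" bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].total_data_clusters')
[ "$free" -eq 61 ] && [ "$total" -eq 99 ]

# Cleanup mirrors the trace: drop the lvol, then the lvstore.
"$RPC" bdev_lvol_delete "$LVOL_UUID"
"$RPC" bdev_lvol_delete_lvstore -u "$LVS_UUID"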
00:09:16.805 06:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:17.064 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:17.323 00:09:17.323 real 0m19.418s 00:09:17.323 user 0m48.730s 00:09:17.323 sys 0m4.786s 00:09:17.323 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.323 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:17.323 ************************************ 00:09:17.323 END TEST lvs_grow_dirty 00:09:17.323 ************************************ 00:09:17.323 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:17.323 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:17.323 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:17.323 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:17.323 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:17.323 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:17.323 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:17.323 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:17.323 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:17.323 nvmf_trace.0 00:09:17.323 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:17.323 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:17.323 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:17.324 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:17.324 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:17.324 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:17.324 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:17.324 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:17.324 rmmod nvme_tcp 00:09:17.324 rmmod nvme_fabrics 00:09:17.324 rmmod nvme_keyring 00:09:17.324 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:17.324 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:17.324 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:17.324 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 136435 ']' 00:09:17.324 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 136435 00:09:17.324 
06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 136435 ']' 00:09:17.324 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 136435 00:09:17.324 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:17.324 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.324 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 136435 00:09:17.324 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.324 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.324 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 136435' 00:09:17.324 killing process with pid 136435 00:09:17.324 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 136435 00:09:17.324 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 136435 00:09:17.585 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:17.585 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:17.585 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:17.585 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:17.585 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:17.585 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:17.585 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:17.585 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:17.585 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:17.585 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.585 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.585 06:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.499 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:19.499 00:09:19.499 real 0m42.924s 00:09:19.499 user 1m12.210s 00:09:19.499 sys 0m8.628s 00:09:19.499 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.499 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:19.499 ************************************ 00:09:19.499 END TEST nvmf_lvs_grow 00:09:19.499 ************************************ 00:09:19.499 06:54:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:19.499 06:54:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:19.499 06:54:40 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.499 06:54:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:19.759 ************************************ 00:09:19.759 START TEST nvmf_bdev_io_wait 00:09:19.759 ************************************ 00:09:19.759 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:19.759 * Looking for test storage... 00:09:19.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:19.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.760 --rc genhtml_branch_coverage=1 00:09:19.760 --rc genhtml_function_coverage=1 00:09:19.760 --rc genhtml_legend=1 00:09:19.760 --rc geninfo_all_blocks=1 00:09:19.760 --rc geninfo_unexecuted_blocks=1 00:09:19.760 00:09:19.760 ' 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:19.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.760 --rc genhtml_branch_coverage=1 00:09:19.760 --rc genhtml_function_coverage=1 00:09:19.760 --rc genhtml_legend=1 00:09:19.760 --rc geninfo_all_blocks=1 00:09:19.760 --rc geninfo_unexecuted_blocks=1 00:09:19.760 00:09:19.760 ' 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:19.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.760 --rc genhtml_branch_coverage=1 00:09:19.760 --rc genhtml_function_coverage=1 00:09:19.760 --rc genhtml_legend=1 00:09:19.760 --rc geninfo_all_blocks=1 00:09:19.760 --rc geninfo_unexecuted_blocks=1 00:09:19.760 00:09:19.760 ' 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:19.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.760 --rc genhtml_branch_coverage=1 00:09:19.760 --rc genhtml_function_coverage=1 00:09:19.760 --rc genhtml_legend=1 00:09:19.760 --rc geninfo_all_blocks=1 00:09:19.760 --rc geninfo_unexecuted_blocks=1 00:09:19.760 00:09:19.760 ' 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:19.760 06:54:40 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.760 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:19.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:19.761 06:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:22.299 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:22.299 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.299 06:54:42 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:22.299 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:22.299 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:22.299 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:22.300 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:22.300 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.300 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:22.300 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:22.300 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:22.300 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:22.300 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:22.300 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:22.300 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:22.300 06:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:22.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:22.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:09:22.300 00:09:22.300 --- 10.0.0.2 ping statistics --- 00:09:22.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.300 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:22.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:22.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:09:22.300 00:09:22.300 --- 10.0.0.1 ping statistics --- 00:09:22.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.300 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=139087 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 139087 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 139087 ']' 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.300 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.300 [2024-11-18 06:54:43.110587] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:09:22.300 [2024-11-18 06:54:43.110692] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.300 [2024-11-18 06:54:43.185703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.300 [2024-11-18 06:54:43.239184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.300 [2024-11-18 06:54:43.239252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.300 [2024-11-18 06:54:43.239282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.300 [2024-11-18 06:54:43.239294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.300 [2024-11-18 06:54:43.239304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.300 [2024-11-18 06:54:43.241042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.300 [2024-11-18 06:54:43.241071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.300 [2024-11-18 06:54:43.241120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.300 [2024-11-18 06:54:43.241123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.559 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.559 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:22.559 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:22.559 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:22.559 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.559 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.559 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:22.560 [2024-11-18 06:54:43.463029] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.560 Malloc0 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.560 [2024-11-18 06:54:43.516005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=139123 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=139124 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=139127 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:22.560 { 00:09:22.560 "params": { 00:09:22.560 "name": "Nvme$subsystem", 00:09:22.560 "trtype": "$TEST_TRANSPORT", 00:09:22.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:22.560 "adrfam": "ipv4", 00:09:22.560 "trsvcid": "$NVMF_PORT", 00:09:22.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:22.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:22.560 "hdgst": ${hdgst:-false}, 00:09:22.560 "ddgst": ${ddgst:-false} 00:09:22.560 }, 00:09:22.560 "method": "bdev_nvme_attach_controller" 00:09:22.560 } 00:09:22.560 EOF 00:09:22.560 )") 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:22.560 { 00:09:22.560 "params": { 00:09:22.560 "name": "Nvme$subsystem", 00:09:22.560 "trtype": "$TEST_TRANSPORT", 00:09:22.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:22.560 "adrfam": "ipv4", 00:09:22.560 "trsvcid": "$NVMF_PORT", 00:09:22.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:22.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:22.560 "hdgst": ${hdgst:-false}, 00:09:22.560 "ddgst": ${ddgst:-false} 00:09:22.560 }, 00:09:22.560 "method": "bdev_nvme_attach_controller" 00:09:22.560 } 00:09:22.560 EOF 00:09:22.560 )") 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=139129 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:09:22.560 { 00:09:22.560 "params": { 00:09:22.560 "name": "Nvme$subsystem", 00:09:22.560 "trtype": "$TEST_TRANSPORT", 00:09:22.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:22.560 "adrfam": "ipv4", 00:09:22.560 "trsvcid": "$NVMF_PORT", 00:09:22.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:22.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:22.560 "hdgst": ${hdgst:-false}, 00:09:22.560 "ddgst": ${ddgst:-false} 00:09:22.560 }, 00:09:22.560 "method": "bdev_nvme_attach_controller" 00:09:22.560 } 00:09:22.560 EOF 00:09:22.560 )") 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:22.560 { 00:09:22.560 "params": { 00:09:22.560 "name": "Nvme$subsystem", 00:09:22.560 "trtype": "$TEST_TRANSPORT", 00:09:22.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:22.560 "adrfam": "ipv4", 00:09:22.560 "trsvcid": "$NVMF_PORT", 00:09:22.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:22.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:22.560 "hdgst": ${hdgst:-false}, 00:09:22.560 "ddgst": ${ddgst:-false} 00:09:22.560 }, 00:09:22.560 "method": "bdev_nvme_attach_controller" 00:09:22.560 } 00:09:22.560 EOF 00:09:22.560 )") 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 139123 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:22.560 "params": { 00:09:22.560 "name": "Nvme1", 00:09:22.560 "trtype": "tcp", 00:09:22.560 "traddr": "10.0.0.2", 00:09:22.560 "adrfam": "ipv4", 00:09:22.560 "trsvcid": "4420", 00:09:22.560 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:22.560 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:22.560 "hdgst": false, 00:09:22.560 "ddgst": false 00:09:22.560 }, 00:09:22.560 "method": "bdev_nvme_attach_controller" 00:09:22.560 }' 00:09:22.560 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:22.561 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:22.561 "params": { 00:09:22.561 "name": "Nvme1", 00:09:22.561 "trtype": "tcp", 00:09:22.561 "traddr": "10.0.0.2", 00:09:22.561 "adrfam": "ipv4", 00:09:22.561 "trsvcid": "4420", 00:09:22.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:22.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:22.561 "hdgst": false, 00:09:22.561 "ddgst": false 00:09:22.561 }, 00:09:22.561 "method": "bdev_nvme_attach_controller" 00:09:22.561 }' 00:09:22.561 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:22.561 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:22.561 "params": { 00:09:22.561 "name": "Nvme1", 00:09:22.561 "trtype": "tcp", 00:09:22.561 "traddr": "10.0.0.2", 00:09:22.561 "adrfam": "ipv4", 00:09:22.561 "trsvcid": "4420", 00:09:22.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:22.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:22.561 "hdgst": false, 00:09:22.561 "ddgst": false 00:09:22.561 }, 00:09:22.561 "method": "bdev_nvme_attach_controller" 00:09:22.561 }' 00:09:22.561 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:22.561 06:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:22.561 "params": { 00:09:22.561 "name": "Nvme1", 00:09:22.561 "trtype": "tcp", 00:09:22.561 "traddr": "10.0.0.2", 00:09:22.561 "adrfam": "ipv4", 00:09:22.561 "trsvcid": "4420", 00:09:22.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:22.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:22.561 "hdgst": false, 00:09:22.561 "ddgst": false 00:09:22.561 }, 00:09:22.561 "method": "bdev_nvme_attach_controller" 00:09:22.561 }' 00:09:22.820 [2024-11-18 06:54:43.565843] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:22.820 [2024-11-18 06:54:43.565843] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:22.820 [2024-11-18 06:54:43.565843] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:09:22.820 [2024-11-18 06:54:43.565926] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:22.820 [2024-11-18 06:54:43.565926] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:22.820 [2024-11-18 06:54:43.565925] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:22.820 [2024-11-18 06:54:43.565992] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:22.820 [2024-11-18 06:54:43.566058] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:22.820 [2024-11-18 06:54:43.748261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.820 [2024-11-18 06:54:43.790014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:23.079 [2024-11-18 06:54:43.851422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.079 [2024-11-18 06:54:43.893579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:23.079 [2024-11-18 06:54:43.947576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.079 [2024-11-18 06:54:43.989785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:23.079 [2024-11-18 06:54:44.019372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.079 [2024-11-18 06:54:44.057079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:23.338 Running I/O for 1 seconds... 00:09:23.338 Running I/O for 1 seconds... 00:09:23.338 Running I/O for 1 seconds... 00:09:23.338 Running I/O for 1 seconds...
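The four bdevperf jobs launched above each read a generated NVMe-oF attach config from an anonymous file descriptor (--json /dev/fd/63); the bdev_nvme_attach_controller parameters printf'd earlier in the trace are what lands on that fd. A minimal standalone sketch of the same pattern follows. Only the params object appears verbatim in the log; the outer "subsystems"/"bdev" wrapper is the usual SPDK JSON-config layout and is assumed here, and the bdevperf path assumes an SPDK build tree.

#!/usr/bin/env bash
# Sketch only: feed an NVMe-oF controller config to bdevperf via process
# substitution, mirroring the --json /dev/fd/63 usage seen in the trace above.
BDEVPERF=./build/examples/bdevperf   # assumption: run from the SPDK repo root

gen_config() {
cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# One writer instance with the same flags used for WRITE_PID above
"$BDEVPERF" -m 0x10 -i 1 --json <(gen_config) -q 128 -o 4096 -w write -t 1 -s 256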
00:09:24.277 6420.00 IOPS, 25.08 MiB/s 00:09:24.277 Latency(us) 00:09:24.277 [2024-11-18T05:54:45.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.277 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:24.277 Nvme1n1 : 1.02 6427.30 25.11 0.00 0.00 19751.91 9709.04 31651.46 00:09:24.277 [2024-11-18T05:54:45.255Z] =================================================================================================================== 00:09:24.277 [2024-11-18T05:54:45.255Z] Total : 6427.30 25.11 0.00 0.00 19751.91 9709.04 31651.46 00:09:24.277 8898.00 IOPS, 34.76 MiB/s 00:09:24.277 Latency(us) 00:09:24.277 [2024-11-18T05:54:45.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.277 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:24.277 Nvme1n1 : 1.01 8953.82 34.98 0.00 0.00 14225.55 6990.51 26796.94 00:09:24.277 [2024-11-18T05:54:45.255Z] =================================================================================================================== 00:09:24.277 [2024-11-18T05:54:45.255Z] Total : 8953.82 34.98 0.00 0.00 14225.55 6990.51 26796.94 00:09:24.277 6426.00 IOPS, 25.10 MiB/s 00:09:24.277 Latency(us) 00:09:24.277 [2024-11-18T05:54:45.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.277 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:24.277 Nvme1n1 : 1.01 6532.48 25.52 0.00 0.00 19539.49 3713.71 45632.47 00:09:24.277 [2024-11-18T05:54:45.255Z] =================================================================================================================== 00:09:24.277 [2024-11-18T05:54:45.255Z] Total : 6532.48 25.52 0.00 0.00 19539.49 3713.71 45632.47 00:09:24.536 180088.00 IOPS, 703.47 MiB/s 00:09:24.536 Latency(us) 00:09:24.536 [2024-11-18T05:54:45.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.536 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:24.536 Nvme1n1 : 1.00 179736.71 702.10 0.00 0.00 708.29 300.37 1941.81 00:09:24.536 [2024-11-18T05:54:45.514Z] =================================================================================================================== 00:09:24.536 [2024-11-18T05:54:45.514Z] Total : 179736.71 702.10 0.00 0.00 708.29 300.37 1941.81 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 139124 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 139127 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 139129 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:24.536 rmmod nvme_tcp 00:09:24.536 rmmod nvme_fabrics 00:09:24.536 rmmod nvme_keyring 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 139087 ']' 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 139087 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 139087 ']' 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 139087 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.536 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 139087 00:09:24.796 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.796 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.796 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 139087' 00:09:24.796 killing process with pid 139087 00:09:24.796 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 139087 00:09:24.796 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 139087 00:09:24.796 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:24.796 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:24.796 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:24.796 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:24.796 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:24.796 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:24.796 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:24.796 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:24.796 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:24.796 06:54:45 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.796 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.796 06:54:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:27.343 00:09:27.343 real 0m7.253s 00:09:27.343 user 0m15.494s 00:09:27.343 sys 0m3.530s 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:27.343 ************************************ 00:09:27.343 END TEST nvmf_bdev_io_wait 00:09:27.343 ************************************ 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:27.343 ************************************ 00:09:27.343 START TEST nvmf_queue_depth 00:09:27.343 ************************************ 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:27.343 * Looking for test storage... 
00:09:27.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:27.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.343 --rc genhtml_branch_coverage=1 00:09:27.343 --rc genhtml_function_coverage=1 00:09:27.343 --rc genhtml_legend=1 00:09:27.343 --rc geninfo_all_blocks=1 00:09:27.343 --rc geninfo_unexecuted_blocks=1 00:09:27.343 00:09:27.343 ' 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:27.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.343 --rc genhtml_branch_coverage=1 00:09:27.343 --rc genhtml_function_coverage=1 00:09:27.343 --rc genhtml_legend=1 00:09:27.343 --rc geninfo_all_blocks=1 00:09:27.343 --rc geninfo_unexecuted_blocks=1 00:09:27.343 00:09:27.343 ' 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:27.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.343 --rc genhtml_branch_coverage=1 00:09:27.343 --rc genhtml_function_coverage=1 00:09:27.343 --rc genhtml_legend=1 00:09:27.343 --rc geninfo_all_blocks=1 00:09:27.343 --rc geninfo_unexecuted_blocks=1 00:09:27.343 00:09:27.343 ' 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:27.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.343 --rc genhtml_branch_coverage=1 00:09:27.343 --rc genhtml_function_coverage=1 00:09:27.343 --rc genhtml_legend=1 00:09:27.343 --rc geninfo_all_blocks=1 00:09:27.343 --rc geninfo_unexecuted_blocks=1 00:09:27.343 00:09:27.343 ' 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.343 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:27.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:27.344 06:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:29.252 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:29.252 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:29.252 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:29.252 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:29.252 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:29.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:29.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:09:29.253 00:09:29.253 --- 10.0.0.2 ping statistics --- 00:09:29.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.253 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:29.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:29.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:09:29.253 00:09:29.253 --- 10.0.0.1 ping statistics --- 00:09:29.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.253 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=141353 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 141353 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 141353 ']' 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.253 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.513 [2024-11-18 06:54:50.236024] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
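For reference, the trace above isolates the target before nvmf_tgt starts: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 stays in the default namespace with 10.0.0.1/24, TCP port 4420 is opened, and both directions are ping-checked. A condensed sketch of that setup is below; the cvl_0_* names are this rig's e810 ports and the nvmf_tgt path assumes an SPDK build tree, so both are placeholders on other machines.

#!/usr/bin/env bash
# Condensed from the ip/iptables calls in the trace above; run as root.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0          # target-side port, moved into the namespace
INI_IF=cvl_0_1          # initiator-side port, stays in the default namespace

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Accept NVMe/TCP traffic on the listen port used in this run
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions, as the test does
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Start the target inside the namespace with the flags from the trace
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2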
00:09:29.513 [2024-11-18 06:54:50.236114] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.513 [2024-11-18 06:54:50.313125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.513 [2024-11-18 06:54:50.362567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.513 [2024-11-18 06:54:50.362612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.513 [2024-11-18 06:54:50.362642] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.513 [2024-11-18 06:54:50.362654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.513 [2024-11-18 06:54:50.362664] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:29.513 [2024-11-18 06:54:50.363267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.772 [2024-11-18 06:54:50.538819] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.772 Malloc0 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.772 06:54:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.772 [2024-11-18 06:54:50.587240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=141383 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 141383 /var/tmp/bdevperf.sock 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 141383 ']' 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.772 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:29.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:29.773 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:29.773 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.773 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.773 [2024-11-18 06:54:50.639511] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:09:29.773 [2024-11-18 06:54:50.639585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141383 ] 00:09:29.773 [2024-11-18 06:54:50.708329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.032 [2024-11-18 06:54:50.755701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.032 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.032 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:30.032 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:30.032 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.032 06:54:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.290 NVMe0n1 00:09:30.290 06:54:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.290 06:54:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:30.290 Running I/O for 10 seconds... 00:09:32.603 8199.00 IOPS, 32.03 MiB/s [2024-11-18T05:54:54.517Z] 8691.50 IOPS, 33.95 MiB/s [2024-11-18T05:54:55.452Z] 8536.33 IOPS, 33.35 MiB/s [2024-11-18T05:54:56.389Z] 8699.50 IOPS, 33.98 MiB/s [2024-11-18T05:54:57.325Z] 8767.80 IOPS, 34.25 MiB/s [2024-11-18T05:54:58.261Z] 8786.50 IOPS, 34.32 MiB/s [2024-11-18T05:54:59.637Z] 8789.43 IOPS, 34.33 MiB/s [2024-11-18T05:55:00.573Z] 8824.12 IOPS, 34.47 MiB/s [2024-11-18T05:55:01.509Z] 8855.56 IOPS, 34.59 MiB/s [2024-11-18T05:55:01.509Z] 8868.40 IOPS, 34.64 MiB/s 00:09:40.531 Latency(us) 00:09:40.531 [2024-11-18T05:55:01.509Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.531 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:40.531 Verification LBA range: start 0x0 length 0x4000 00:09:40.531 NVMe0n1 : 10.09 8888.56 34.72 0.00 0.00 114668.35 21165.70 68739.98 00:09:40.531 [2024-11-18T05:55:01.509Z] =================================================================================================================== 00:09:40.531 [2024-11-18T05:55:01.509Z] Total : 8888.56 34.72 0.00 0.00 114668.35 21165.70 68739.98 00:09:40.531 { 00:09:40.531 "results": [ 00:09:40.531 { 00:09:40.531 "job": "NVMe0n1", 00:09:40.531 "core_mask": "0x1", 00:09:40.531 "workload": "verify", 00:09:40.531 "status": "finished", 00:09:40.531 "verify_range": { 00:09:40.531 "start": 0, 00:09:40.531 "length": 16384 00:09:40.531 }, 00:09:40.531 "queue_depth": 1024, 00:09:40.531 "io_size": 4096, 00:09:40.531 "runtime": 10.089262, 00:09:40.531 "iops": 8888.558945143857, 00:09:40.531 "mibps": 34.72093337946819, 00:09:40.531 "io_failed": 0, 00:09:40.531 "io_timeout": 0, 00:09:40.531 "avg_latency_us": 114668.35029438743, 00:09:40.531 "min_latency_us": 21165.70074074074, 00:09:40.531 "max_latency_us": 68739.98222222223 00:09:40.531 } 00:09:40.531 ], 00:09:40.531 "core_count": 1 00:09:40.531 } 00:09:40.531 06:55:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 141383 00:09:40.531 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 141383 ']' 00:09:40.531 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 141383 00:09:40.531 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:40.531 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.531 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 141383 00:09:40.531 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.531 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.531 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 141383' 00:09:40.531 killing process with pid 141383 00:09:40.531 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 141383 00:09:40.531 Received shutdown signal, test time was about 10.000000 seconds 00:09:40.531 00:09:40.531 Latency(us) 00:09:40.531 [2024-11-18T05:55:01.509Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.531 [2024-11-18T05:55:01.509Z] =================================================================================================================== 00:09:40.531 [2024-11-18T05:55:01.509Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:40.531 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 141383 00:09:40.790 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:40.790 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:40.790 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:40.790 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:40.790 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:40.790 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:40.790 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:40.790 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:40.790 rmmod nvme_tcp 00:09:40.790 rmmod nvme_fabrics 00:09:40.790 rmmod nvme_keyring 00:09:40.790 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:40.790 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:40.790 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:40.790 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 141353 ']' 00:09:40.790 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 141353 00:09:40.790 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 141353 ']' 00:09:40.790 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 141353 00:09:40.790 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:40.790 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.790 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 141353 00:09:40.790 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:40.790 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:40.790 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 141353' 00:09:40.790 killing process with pid 141353 00:09:40.790 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 141353 00:09:40.790 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 141353 00:09:41.051 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:41.051 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:41.051 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:41.051 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:41.051 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:41.051 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:41.051 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:41.051 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:41.051 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:41.051 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.051 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.051 06:55:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.594 06:55:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:43.594 00:09:43.594 real 0m16.175s 00:09:43.594 user 0m22.715s 00:09:43.594 sys 0m3.193s 00:09:43.594 06:55:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.594 06:55:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:43.594 ************************************ 00:09:43.594 END TEST nvmf_queue_depth 00:09:43.594 ************************************ 00:09:43.594 06:55:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:43.594 06:55:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:43.594 06:55:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.594 06:55:03 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:43.594 ************************************ 00:09:43.594 START TEST nvmf_target_multipath 00:09:43.594 ************************************ 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:43.594 * Looking for test storage... 00:09:43.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:43.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.594 --rc genhtml_branch_coverage=1 00:09:43.594 --rc genhtml_function_coverage=1 00:09:43.594 --rc genhtml_legend=1 00:09:43.594 --rc geninfo_all_blocks=1 00:09:43.594 --rc geninfo_unexecuted_blocks=1 00:09:43.594 00:09:43.594 ' 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:43.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.594 --rc genhtml_branch_coverage=1 00:09:43.594 --rc genhtml_function_coverage=1 00:09:43.594 --rc genhtml_legend=1 00:09:43.594 --rc geninfo_all_blocks=1 00:09:43.594 --rc geninfo_unexecuted_blocks=1 00:09:43.594 00:09:43.594 ' 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:43.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.594 --rc genhtml_branch_coverage=1 00:09:43.594 --rc genhtml_function_coverage=1 00:09:43.594 --rc genhtml_legend=1 00:09:43.594 --rc geninfo_all_blocks=1 00:09:43.594 --rc geninfo_unexecuted_blocks=1 00:09:43.594 00:09:43.594 ' 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:43.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.594 --rc genhtml_branch_coverage=1 00:09:43.594 --rc genhtml_function_coverage=1 00:09:43.594 --rc genhtml_legend=1 00:09:43.594 --rc geninfo_all_blocks=1 00:09:43.594 --rc geninfo_unexecuted_blocks=1 00:09:43.594 00:09:43.594 ' 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.594 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:43.595 06:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:45.506 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:45.506 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:45.506 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:45.507 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.507 06:55:06 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:45.507 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:45.507 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:45.766 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:45.766 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:45.766 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:45.766 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:45.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:09:45.766 00:09:45.766 --- 10.0.0.2 ping statistics --- 00:09:45.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.766 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:09:45.766 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:45.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:45.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:09:45.766 00:09:45.766 --- 10.0.0.1 ping statistics --- 00:09:45.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.766 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:09:45.766 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.766 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:45.766 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:45.766 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.766 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:45.767 only one NIC for nvmf test 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
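The nvmf_tcp_init sequence traced above is the part worth keeping in mind when reading the 10.0.0.x addresses: with a two-port E810 card, the harness moves one port into a private network namespace so a single machine can act as both target (10.0.0.2, inside cvl_0_0_ns_spdk) and initiator (10.0.0.1, in the root namespace). Roughly, using the interface names from this run:

# Condensed from the nvmf_tcp_init commands traced above (interface names from this run).
ip netns add cvl_0_0_ns_spdk                         # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the first port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # root ns -> target ns, the 0.289 ms reply above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns, the 0.153 ms reply above

The "only one NIC for nvmf test" notice that follows is the multipath script bailing out early: with no second target IP configured there is nothing to multipath over, so it prints the notice, runs nvmftestfini, and exits 0 (multipath.sh@48, below).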
00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:45.767 rmmod nvme_tcp 00:09:45.767 rmmod nvme_fabrics 00:09:45.767 rmmod nvme_keyring 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.767 06:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.759 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:47.759 00:09:47.759 real 0m4.657s 00:09:47.759 user 0m0.963s 00:09:47.759 sys 0m1.688s 00:09:47.760 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.760 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:47.760 ************************************ 00:09:47.760 END TEST nvmf_target_multipath 00:09:47.760 ************************************ 00:09:47.760 06:55:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:47.760 06:55:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:47.760 06:55:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.760 06:55:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:47.760 ************************************ 00:09:47.760 START TEST nvmf_zcopy 00:09:47.760 ************************************ 00:09:47.760 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:48.030 * Looking for test storage... 
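The block of scripts/common.sh lines that follows (the multipath section above went through the identical sequence) is the coverage tooling's version gate: lt 1.15 2 splits the lcov version and the threshold on dots/dashes and compares them field by field before choosing LCOV_OPTS. A simplified re-implementation of that comparison is sketched below for readers who do not want to decode the xtrace; it is not the verbatim scripts/common.sh code, and the real helper also normalizes non-numeric fields via a decimal() function that is omitted here.

# Simplified sketch of the lt/cmp_versions logic traced below; not the verbatim scripts/common.sh code.
lt() { cmp_versions "$1" "<" "$2"; }

cmp_versions() {
    local IFS=.- op=$2 v d1 d2
    local -a ver1 ver2
    read -ra ver1 <<< "$1"        # "1.15" -> (1 15)
    read -ra ver2 <<< "$3"        # "2"    -> (2)
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        if (( d1 > d2 )); then [[ $op == ">" || $op == ">=" ]]; return; fi
        if (( d1 < d2 )); then [[ $op == "<" || $op == "<=" ]]; return; fi
    done
    [[ $op == "==" || $op == "<=" || $op == ">=" ]]
}

lt 1.15 2 && echo "version compares lower"   # returns true (0) here, the same path the trace takes before setting LCOV_OPTS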
00:09:48.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:48.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.030 --rc genhtml_branch_coverage=1 00:09:48.030 --rc genhtml_function_coverage=1 00:09:48.030 --rc genhtml_legend=1 00:09:48.030 --rc geninfo_all_blocks=1 00:09:48.030 --rc geninfo_unexecuted_blocks=1 00:09:48.030 00:09:48.030 ' 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:48.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.030 --rc genhtml_branch_coverage=1 00:09:48.030 --rc genhtml_function_coverage=1 00:09:48.030 --rc genhtml_legend=1 00:09:48.030 --rc geninfo_all_blocks=1 00:09:48.030 --rc geninfo_unexecuted_blocks=1 00:09:48.030 00:09:48.030 ' 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:48.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.030 --rc genhtml_branch_coverage=1 00:09:48.030 --rc genhtml_function_coverage=1 00:09:48.030 --rc genhtml_legend=1 00:09:48.030 --rc geninfo_all_blocks=1 00:09:48.030 --rc geninfo_unexecuted_blocks=1 00:09:48.030 00:09:48.030 ' 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:48.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.030 --rc genhtml_branch_coverage=1 00:09:48.030 --rc genhtml_function_coverage=1 00:09:48.030 --rc genhtml_legend=1 00:09:48.030 --rc geninfo_all_blocks=1 00:09:48.030 --rc geninfo_unexecuted_blocks=1 00:09:48.030 00:09:48.030 ' 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.030 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:48.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:48.031 06:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:50.107 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:50.107 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:50.107 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:50.107 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:50.107 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:50.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:50.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:09:50.396 00:09:50.396 --- 10.0.0.2 ping statistics --- 00:09:50.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.396 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:50.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:50.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:09:50.396 00:09:50.396 --- 10.0.0.1 ping statistics --- 00:09:50.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.396 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=146610 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 146610 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 146610 ']' 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.396 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.396 [2024-11-18 06:55:11.255724] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
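For reference, the network plumbing that nvmftestinit performs in the trace above boils down to the following commands (a consolidated sketch of what the trace shows; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones this particular run detected and assigned, and will differ on other hosts):

# Target-side NIC moves into its own namespace; initiator-side NIC stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator check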
00:09:50.396 [2024-11-18 06:55:11.255833] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.396 [2024-11-18 06:55:11.329695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.669 [2024-11-18 06:55:11.376873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.669 [2024-11-18 06:55:11.376943] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.669 [2024-11-18 06:55:11.376958] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.669 [2024-11-18 06:55:11.376969] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.669 [2024-11-18 06:55:11.376979] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:50.669 [2024-11-18 06:55:11.377585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.669 [2024-11-18 06:55:11.516208] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.669 [2024-11-18 06:55:11.532416] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.669 malloc0 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:50.669 { 00:09:50.669 "params": { 00:09:50.669 "name": "Nvme$subsystem", 00:09:50.669 "trtype": "$TEST_TRANSPORT", 00:09:50.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:50.669 "adrfam": "ipv4", 00:09:50.669 "trsvcid": "$NVMF_PORT", 00:09:50.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:50.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:50.669 "hdgst": ${hdgst:-false}, 00:09:50.669 "ddgst": ${ddgst:-false} 00:09:50.669 }, 00:09:50.669 "method": "bdev_nvme_attach_controller" 00:09:50.669 } 00:09:50.669 EOF 00:09:50.669 )") 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
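The target-side configuration that zcopy.sh issues through its rpc_cmd helper (target/zcopy.sh@22 through @30 in the trace) maps onto standard SPDK RPC methods; run by hand against the same target it would look roughly like this, using scripts/rpc.py, the stock SPDK RPC client, as a stand-in for the test's rpc_cmd wrapper:

# Zero-copy-enabled TCP transport plus one subsystem backed by a malloc bdev,
# mirroring the rpc_cmd calls visible in the trace above.
scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1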
00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:50.669 06:55:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:50.669 "params": { 00:09:50.669 "name": "Nvme1", 00:09:50.669 "trtype": "tcp", 00:09:50.669 "traddr": "10.0.0.2", 00:09:50.669 "adrfam": "ipv4", 00:09:50.669 "trsvcid": "4420", 00:09:50.669 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:50.669 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:50.669 "hdgst": false, 00:09:50.669 "ddgst": false 00:09:50.669 }, 00:09:50.669 "method": "bdev_nvme_attach_controller" 00:09:50.669 }' 00:09:50.669 [2024-11-18 06:55:11.619440] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:50.669 [2024-11-18 06:55:11.619558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146762 ] 00:09:50.948 [2024-11-18 06:55:11.693006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.948 [2024-11-18 06:55:11.741991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.221 Running I/O for 10 seconds... 00:09:53.225 5818.00 IOPS, 45.45 MiB/s [2024-11-18T05:55:15.198Z] 5865.50 IOPS, 45.82 MiB/s [2024-11-18T05:55:16.190Z] 5866.00 IOPS, 45.83 MiB/s [2024-11-18T05:55:17.175Z] 5866.75 IOPS, 45.83 MiB/s [2024-11-18T05:55:18.159Z] 5866.20 IOPS, 45.83 MiB/s [2024-11-18T05:55:19.155Z] 5866.00 IOPS, 45.83 MiB/s [2024-11-18T05:55:20.103Z] 5872.14 IOPS, 45.88 MiB/s [2024-11-18T05:55:21.038Z] 5873.75 IOPS, 45.89 MiB/s [2024-11-18T05:55:22.413Z] 5872.89 IOPS, 45.88 MiB/s [2024-11-18T05:55:22.414Z] 5871.70 IOPS, 45.87 MiB/s 00:10:01.436 Latency(us) 00:10:01.436 [2024-11-18T05:55:22.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:01.436 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:01.436 Verification LBA range: start 0x0 length 0x1000 00:10:01.436 Nvme1n1 : 10.01 5876.66 45.91 0.00 0.00 21723.52 2354.44 29127.11 00:10:01.436 [2024-11-18T05:55:22.414Z] =================================================================================================================== 00:10:01.436 [2024-11-18T05:55:22.414Z] Total : 5876.66 45.91 0.00 0.00 21723.52 2354.44 29127.11 00:10:01.436 06:55:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=147987 00:10:01.436 06:55:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:01.436 06:55:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.436 06:55:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:01.436 06:55:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:01.436 06:55:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:01.436 06:55:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:01.436 06:55:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:01.436 06:55:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:01.436 { 00:10:01.436 "params": { 00:10:01.436 "name": 
"Nvme$subsystem", 00:10:01.436 "trtype": "$TEST_TRANSPORT", 00:10:01.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.436 "adrfam": "ipv4", 00:10:01.436 "trsvcid": "$NVMF_PORT", 00:10:01.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.436 "hdgst": ${hdgst:-false}, 00:10:01.436 "ddgst": ${ddgst:-false} 00:10:01.436 }, 00:10:01.436 "method": "bdev_nvme_attach_controller" 00:10:01.436 } 00:10:01.436 EOF 00:10:01.436 )") 00:10:01.436 06:55:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:01.436 [2024-11-18 06:55:22.206625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.206670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 06:55:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:01.436 06:55:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:01.436 06:55:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:01.436 "params": { 00:10:01.436 "name": "Nvme1", 00:10:01.436 "trtype": "tcp", 00:10:01.436 "traddr": "10.0.0.2", 00:10:01.436 "adrfam": "ipv4", 00:10:01.436 "trsvcid": "4420", 00:10:01.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.436 "hdgst": false, 00:10:01.436 "ddgst": false 00:10:01.436 }, 00:10:01.436 "method": "bdev_nvme_attach_controller" 00:10:01.436 }' 00:10:01.436 [2024-11-18 06:55:22.214580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.214606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 [2024-11-18 06:55:22.222588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.222612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 [2024-11-18 06:55:22.230621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.230644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 [2024-11-18 06:55:22.238628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.238660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 [2024-11-18 06:55:22.246462] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:10:01.436 [2024-11-18 06:55:22.246562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147987 ] 00:10:01.436 [2024-11-18 06:55:22.246664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.246686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 [2024-11-18 06:55:22.254676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.254699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 [2024-11-18 06:55:22.262696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.262719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 [2024-11-18 06:55:22.270711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.270732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 [2024-11-18 06:55:22.278752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.278788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 [2024-11-18 06:55:22.286758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.286794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 [2024-11-18 06:55:22.294796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.294816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 [2024-11-18 06:55:22.302814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.302849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 [2024-11-18 06:55:22.310833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.310854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 [2024-11-18 06:55:22.315267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.436 [2024-11-18 06:55:22.318865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.318886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 [2024-11-18 06:55:22.326905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.326944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 [2024-11-18 06:55:22.334904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.334931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 [2024-11-18 06:55:22.342912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.342933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:01.436 [2024-11-18 06:55:22.350958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.350985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 [2024-11-18 06:55:22.358960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.358984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 [2024-11-18 06:55:22.364789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.436 [2024-11-18 06:55:22.366984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.367012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 [2024-11-18 06:55:22.375004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.375025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 [2024-11-18 06:55:22.383060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.383095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 [2024-11-18 06:55:22.391083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.391119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 [2024-11-18 06:55:22.399106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.399144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.436 [2024-11-18 06:55:22.407127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.436 [2024-11-18 06:55:22.407166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.695 [2024-11-18 06:55:22.415152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.695 [2024-11-18 06:55:22.415191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.695 [2024-11-18 06:55:22.423174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.695 [2024-11-18 06:55:22.423214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.695 [2024-11-18 06:55:22.431186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.695 [2024-11-18 06:55:22.431224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.695 [2024-11-18 06:55:22.439181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.695 [2024-11-18 06:55:22.439203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.695 [2024-11-18 06:55:22.447231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.695 [2024-11-18 06:55:22.447269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.695 [2024-11-18 06:55:22.455254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.695 [2024-11-18 06:55:22.455292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.695 [2024-11-18 
06:55:22.463262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.695 [2024-11-18 06:55:22.463291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.695 [2024-11-18 06:55:22.471263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.695 [2024-11-18 06:55:22.471284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.695 [2024-11-18 06:55:22.479303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.695 [2024-11-18 06:55:22.479325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.695 [2024-11-18 06:55:22.487361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.695 [2024-11-18 06:55:22.487387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.695 [2024-11-18 06:55:22.495354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.695 [2024-11-18 06:55:22.495377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.695 [2024-11-18 06:55:22.503420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.695 [2024-11-18 06:55:22.503445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.695 [2024-11-18 06:55:22.511420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.695 [2024-11-18 06:55:22.511444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.695 [2024-11-18 06:55:22.519438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.695 [2024-11-18 06:55:22.519481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.695 [2024-11-18 06:55:22.527460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.695 [2024-11-18 06:55:22.527507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.695 [2024-11-18 06:55:22.535514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.695 [2024-11-18 06:55:22.535537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.695 [2024-11-18 06:55:22.543539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.695 [2024-11-18 06:55:22.543561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.695 [2024-11-18 06:55:22.551560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.695 [2024-11-18 06:55:22.551583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.695 [2024-11-18 06:55:22.559570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.695 [2024-11-18 06:55:22.559594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.695 [2024-11-18 06:55:22.567604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.695 [2024-11-18 06:55:22.567631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.695 [2024-11-18 06:55:22.575625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.695 [2024-11-18 06:55:22.575648] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.695 [2024-11-18 06:55:22.583648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.695 [2024-11-18 06:55:22.583670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.695 [2024-11-18 06:55:22.591668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.695 [2024-11-18 06:55:22.591690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.696 [2024-11-18 06:55:22.599688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.696 [2024-11-18 06:55:22.599712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.696 [2024-11-18 06:55:22.607748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.696 [2024-11-18 06:55:22.607774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.696 [2024-11-18 06:55:22.615750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.696 [2024-11-18 06:55:22.615772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.696 [2024-11-18 06:55:22.623784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.696 [2024-11-18 06:55:22.623807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.696 [2024-11-18 06:55:22.631807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.696 [2024-11-18 06:55:22.631828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.696 [2024-11-18 06:55:22.639828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.696 [2024-11-18 06:55:22.639863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.696 [2024-11-18 06:55:22.647866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.696 [2024-11-18 06:55:22.647885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.696 [2024-11-18 06:55:22.655887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.696 [2024-11-18 06:55:22.655909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.696 [2024-11-18 06:55:22.663908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.696 [2024-11-18 06:55:22.663928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.696 [2024-11-18 06:55:22.671939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.696 [2024-11-18 06:55:22.671964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.679945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.679965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.687963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.687983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.695987] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.696008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.704009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.704029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.712039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.712063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.720058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.720081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 Running I/O for 5 seconds... 00:10:01.955 [2024-11-18 06:55:22.731385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.731414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.740613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.740642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.752739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.752767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.763678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.763721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.774892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.774919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.788327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.788354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.798962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.798990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.809903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.809929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.820949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.820975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.831615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.831643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.844509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 
[2024-11-18 06:55:22.844537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.854314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.854340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.865178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.865205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.877583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.877610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.887273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.887299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.900917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.900944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.911356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.911383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.922080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.922107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.955 [2024-11-18 06:55:22.932752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.955 [2024-11-18 06:55:22.932779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.214 [2024-11-18 06:55:22.943679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.214 [2024-11-18 06:55:22.943721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.214 [2024-11-18 06:55:22.956551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.214 [2024-11-18 06:55:22.956578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.214 [2024-11-18 06:55:22.966662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.214 [2024-11-18 06:55:22.966689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.214 [2024-11-18 06:55:22.977527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.214 [2024-11-18 06:55:22.977555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.214 [2024-11-18 06:55:22.990035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.214 [2024-11-18 06:55:22.990061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.214 [2024-11-18 06:55:23.000072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.214 [2024-11-18 06:55:23.000098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.214 [2024-11-18 06:55:23.011168] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.214 [2024-11-18 06:55:23.011196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.214 [2024-11-18 06:55:23.023906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.214 [2024-11-18 06:55:23.023933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.214 [2024-11-18 06:55:23.034018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.214 [2024-11-18 06:55:23.034044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.214 [2024-11-18 06:55:23.044243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.214 [2024-11-18 06:55:23.044270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.214 [2024-11-18 06:55:23.055206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.214 [2024-11-18 06:55:23.055232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.214 [2024-11-18 06:55:23.067578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.214 [2024-11-18 06:55:23.067619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.214 [2024-11-18 06:55:23.077829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.214 [2024-11-18 06:55:23.077856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.214 [2024-11-18 06:55:23.088204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.214 [2024-11-18 06:55:23.088231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.214 [2024-11-18 06:55:23.098887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.214 [2024-11-18 06:55:23.098915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.214 [2024-11-18 06:55:23.109629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.214 [2024-11-18 06:55:23.109656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.214 [2024-11-18 06:55:23.120393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.214 [2024-11-18 06:55:23.120420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.214 [2024-11-18 06:55:23.133211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.214 [2024-11-18 06:55:23.133237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.214 [2024-11-18 06:55:23.143403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.214 [2024-11-18 06:55:23.143431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.214 [2024-11-18 06:55:23.154023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.214 [2024-11-18 06:55:23.154049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.214 [2024-11-18 06:55:23.164578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.214 [2024-11-18 06:55:23.164606] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:02.214 [2024-11-18 06:55:23.175379] through 00:10:05.651 [2024-11-18 06:55:26.522884]: the same two errors repeat back-to-back for the whole window, one pair roughly every 10-13 ms, as add-namespace RPCs are repeatedly issued with NSID 1, which is already in use:
  subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
  nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
Interleaved with the errors are periodic I/O throughput reports:
  00:10:02.799 11662.00 IOPS, 91.11 MiB/s [2024-11-18T05:55:23.777Z]
  00:10:03.837 11783.50 IOPS, 92.06 MiB/s [2024-11-18T05:55:24.815Z]
  00:10:04.873 11801.00 IOPS, 92.20 MiB/s [2024-11-18T05:55:25.851Z]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.651 [2024-11-18 06:55:26.522911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.651 [2024-11-18 06:55:26.533364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.651 [2024-11-18 06:55:26.533391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.651 [2024-11-18 06:55:26.544161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.651 [2024-11-18 06:55:26.544203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.651 [2024-11-18 06:55:26.554538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.651 [2024-11-18 06:55:26.554565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.651 [2024-11-18 06:55:26.565092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.651 [2024-11-18 06:55:26.565120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.651 [2024-11-18 06:55:26.575518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.651 [2024-11-18 06:55:26.575546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.651 [2024-11-18 06:55:26.586458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.651 [2024-11-18 06:55:26.586510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.651 [2024-11-18 06:55:26.597127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.651 [2024-11-18 06:55:26.597160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.651 [2024-11-18 06:55:26.607993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.651 [2024-11-18 06:55:26.608019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.651 [2024-11-18 06:55:26.619350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.651 [2024-11-18 06:55:26.619376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 [2024-11-18 06:55:26.629974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.630001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 [2024-11-18 06:55:26.642641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.642669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 [2024-11-18 06:55:26.652846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.652873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 [2024-11-18 06:55:26.663820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.663846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 [2024-11-18 06:55:26.674632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.674660] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 [2024-11-18 06:55:26.685454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.685505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 [2024-11-18 06:55:26.697956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.697982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 [2024-11-18 06:55:26.708242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.708268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 [2024-11-18 06:55:26.719208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.719235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 [2024-11-18 06:55:26.731867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.731893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 11807.75 IOPS, 92.25 MiB/s [2024-11-18T05:55:26.888Z] [2024-11-18 06:55:26.741998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.742025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 [2024-11-18 06:55:26.752802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.752829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 [2024-11-18 06:55:26.765180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.765206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 [2024-11-18 06:55:26.775016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.775042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 [2024-11-18 06:55:26.786310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.786337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 [2024-11-18 06:55:26.797254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.797282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 [2024-11-18 06:55:26.811260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.811286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 [2024-11-18 06:55:26.821502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.821530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 [2024-11-18 06:55:26.832333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.832360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 [2024-11-18 
06:55:26.844916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.844942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 [2024-11-18 06:55:26.854824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.854851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 [2024-11-18 06:55:26.865900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.865927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 [2024-11-18 06:55:26.876599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.876626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.910 [2024-11-18 06:55:26.887306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.910 [2024-11-18 06:55:26.887334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.170 [2024-11-18 06:55:26.900550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.170 [2024-11-18 06:55:26.900591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.170 [2024-11-18 06:55:26.911018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.170 [2024-11-18 06:55:26.911044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.170 [2024-11-18 06:55:26.921680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.170 [2024-11-18 06:55:26.921706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.170 [2024-11-18 06:55:26.934503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.170 [2024-11-18 06:55:26.934530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.170 [2024-11-18 06:55:26.944270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.170 [2024-11-18 06:55:26.944297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.170 [2024-11-18 06:55:26.955231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.170 [2024-11-18 06:55:26.955257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.170 [2024-11-18 06:55:26.967797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.170 [2024-11-18 06:55:26.967824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.170 [2024-11-18 06:55:26.977711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.170 [2024-11-18 06:55:26.977739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.170 [2024-11-18 06:55:26.988179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.170 [2024-11-18 06:55:26.988206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.170 [2024-11-18 06:55:26.999042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.170 [2024-11-18 06:55:26.999069] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.170 [2024-11-18 06:55:27.010074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.170 [2024-11-18 06:55:27.010100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.170 [2024-11-18 06:55:27.022449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.170 [2024-11-18 06:55:27.022476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.170 [2024-11-18 06:55:27.032259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.170 [2024-11-18 06:55:27.032286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.170 [2024-11-18 06:55:27.042915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.170 [2024-11-18 06:55:27.042941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.170 [2024-11-18 06:55:27.053418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.170 [2024-11-18 06:55:27.053445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.170 [2024-11-18 06:55:27.066548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.170 [2024-11-18 06:55:27.066590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.170 [2024-11-18 06:55:27.078647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.170 [2024-11-18 06:55:27.078674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.170 [2024-11-18 06:55:27.087620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.170 [2024-11-18 06:55:27.087648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.170 [2024-11-18 06:55:27.099591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.170 [2024-11-18 06:55:27.099618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.170 [2024-11-18 06:55:27.112790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.170 [2024-11-18 06:55:27.112817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.170 [2024-11-18 06:55:27.122887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.170 [2024-11-18 06:55:27.122914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.170 [2024-11-18 06:55:27.133586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.170 [2024-11-18 06:55:27.133614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.170 [2024-11-18 06:55:27.146042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.170 [2024-11-18 06:55:27.146071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.430 [2024-11-18 06:55:27.155425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.430 [2024-11-18 06:55:27.155466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.430 [2024-11-18 06:55:27.169505] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.430 [2024-11-18 06:55:27.169533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.430 [2024-11-18 06:55:27.180010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.430 [2024-11-18 06:55:27.180037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.430 [2024-11-18 06:55:27.190659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.430 [2024-11-18 06:55:27.190687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.430 [2024-11-18 06:55:27.201163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.430 [2024-11-18 06:55:27.201190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.430 [2024-11-18 06:55:27.211751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.430 [2024-11-18 06:55:27.211794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.430 [2024-11-18 06:55:27.222835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.430 [2024-11-18 06:55:27.222862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.430 [2024-11-18 06:55:27.233170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.430 [2024-11-18 06:55:27.233197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.430 [2024-11-18 06:55:27.243694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.430 [2024-11-18 06:55:27.243722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.430 [2024-11-18 06:55:27.254923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.430 [2024-11-18 06:55:27.254951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.430 [2024-11-18 06:55:27.267515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.430 [2024-11-18 06:55:27.267543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.430 [2024-11-18 06:55:27.277815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.430 [2024-11-18 06:55:27.277841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.430 [2024-11-18 06:55:27.288488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.430 [2024-11-18 06:55:27.288524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.430 [2024-11-18 06:55:27.300894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.430 [2024-11-18 06:55:27.300920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.430 [2024-11-18 06:55:27.310006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.430 [2024-11-18 06:55:27.310032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.430 [2024-11-18 06:55:27.322678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.430 [2024-11-18 06:55:27.322706] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.430 [2024-11-18 06:55:27.332701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.430 [2024-11-18 06:55:27.332729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.430 [2024-11-18 06:55:27.343097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.430 [2024-11-18 06:55:27.343123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.430 [2024-11-18 06:55:27.354018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.430 [2024-11-18 06:55:27.354046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.430 [2024-11-18 06:55:27.366575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.430 [2024-11-18 06:55:27.366603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.430 [2024-11-18 06:55:27.376199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.430 [2024-11-18 06:55:27.376227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.430 [2024-11-18 06:55:27.386383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.430 [2024-11-18 06:55:27.386411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.430 [2024-11-18 06:55:27.397136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.430 [2024-11-18 06:55:27.397163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.689 [2024-11-18 06:55:27.409866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.689 [2024-11-18 06:55:27.409894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.689 [2024-11-18 06:55:27.425777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.689 [2024-11-18 06:55:27.425809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.689 [2024-11-18 06:55:27.435893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.689 [2024-11-18 06:55:27.435926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.689 [2024-11-18 06:55:27.446232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.689 [2024-11-18 06:55:27.446260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.689 [2024-11-18 06:55:27.456872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.689 [2024-11-18 06:55:27.456899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.689 [2024-11-18 06:55:27.467668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.689 [2024-11-18 06:55:27.467695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.689 [2024-11-18 06:55:27.478506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.689 [2024-11-18 06:55:27.478533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.689 [2024-11-18 06:55:27.491549] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.689 [2024-11-18 06:55:27.491576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.689 [2024-11-18 06:55:27.501906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.689 [2024-11-18 06:55:27.501932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.689 [2024-11-18 06:55:27.512464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.689 [2024-11-18 06:55:27.512498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.689 [2024-11-18 06:55:27.522997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.689 [2024-11-18 06:55:27.523024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.689 [2024-11-18 06:55:27.533673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.689 [2024-11-18 06:55:27.533700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.689 [2024-11-18 06:55:27.547588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.689 [2024-11-18 06:55:27.547615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.689 [2024-11-18 06:55:27.557913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.689 [2024-11-18 06:55:27.557941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.689 [2024-11-18 06:55:27.568800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.689 [2024-11-18 06:55:27.568826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.689 [2024-11-18 06:55:27.581234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.689 [2024-11-18 06:55:27.581260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.689 [2024-11-18 06:55:27.590925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.689 [2024-11-18 06:55:27.590952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.689 [2024-11-18 06:55:27.601915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.689 [2024-11-18 06:55:27.601942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.689 [2024-11-18 06:55:27.613099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.689 [2024-11-18 06:55:27.613125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.689 [2024-11-18 06:55:27.625575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.689 [2024-11-18 06:55:27.625602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.689 [2024-11-18 06:55:27.635963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.689 [2024-11-18 06:55:27.635990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.690 [2024-11-18 06:55:27.646906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.690 [2024-11-18 06:55:27.646939] 
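The repeated pair above is what the target prints when an RPC asks to attach a bdev under a namespace ID the subsystem already exposes. A minimal sketch of a call that reproduces and then clears the condition, assuming the standard scripts/rpc.py client and the subsystem NQN used elsewhere in this log; the bdev name Malloc1 is illustrative:

  # attaching another bdev as NSID 1 while NSID 1 is taken triggers
  #   "Requested NSID 1 already in use" / "Unable to add namespace"
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  # freeing the NSID first lets the same call succeed
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1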
[... the pair continues, 06:55:27.647 – 06:55:27.725, elapsed 00:10:06.690 – 00:10:06.950; repetitions omitted ...]
00:10:06.950 11824.80 IOPS, 92.38 MiB/s [2024-11-18T05:55:27.928Z]
[... two more pairs at 06:55:27.736 and 06:55:27.743; omitted ...]
00:10:06.950
00:10:06.950 Latency(us)
00:10:06.950 [2024-11-18T05:55:27.928Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:06.950 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:06.950 Nvme1n1                      :       5.01   11827.19      92.40       0.00       0.00   10809.57    3932.16   18058.81
00:10:06.950 [2024-11-18T05:55:27.928Z] ===================================================================================================================
00:10:06.950 [2024-11-18T05:55:27.928Z] Total                        :             11827.19      92.40       0.00       0.00   10809.57    3932.16   18058.81
[... the error pair resumes at 06:55:27.751 and repeats through 06:55:27.879, elapsed 00:10:06.950 – 00:10:06.951; repetitions omitted ...]
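As a quick consistency check on the summary above: with queue depth 128 and the reported 10809.57 us average latency, Little's law gives roughly the reported IOPS, and IOPS times the 8 KiB I/O size reproduces the reported bandwidth:

  \[ \text{IOPS} \approx \frac{\text{QD}}{\bar{L}} = \frac{128}{10809.57\ \mu\text{s}} \approx 11841 \qquad 11827.19 \times 8\,\text{KiB} \approx 92.4\ \text{MiB/s} \]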
[... the pair repeats a final few times, 06:55:27.887 – 06:55:27.935, elapsed 00:10:06.951 – 00:10:07.211; repetitions omitted ...]
00:10:07.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (147987) - No such process
00:10:07.211 06:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 147987
00:10:07.211 06:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:07.211 06:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.211 06:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:07.211 06:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.211 06:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:07.211 06:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.211 06:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:07.211 delay0
00:10:07.211 06:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.211 06:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:10:07.211 06:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.211 06:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:07.211 06:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
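The rpc_cmd calls traced above swap the subsystem's namespace for a deliberately slow one before the abort run that follows. A condensed sketch of the same sequence against a running target, assuming the standard scripts/rpc.py client; names and values are taken from the trace:

  # detach the current namespace (NSID 1) from the subsystem
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # wrap the existing malloc0 bdev in a delay bdev; the four latencies
  # (average and tail read, average and tail write) are 1,000,000 each,
  # i.e. about 1 s assuming the usual microsecond units
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # expose the delay bdev as NSID 1, presumably so outstanding I/O stays
  # queued long enough for the abort example below to cancel it
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1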
00:10:07.211 06:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:10:07.211 [2024-11-18 06:55:28.054449] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:10:13.775 Initializing NVMe Controllers
00:10:13.775 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:13.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:13.775 Initialization complete. Launching workers.
00:10:13.775 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 778
00:10:13.775 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1065, failed to submit 33
00:10:13.775 success 932, unsuccessful 133, failed 0
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:13.775 rmmod nvme_tcp
00:10:13.775 rmmod nvme_fabrics
00:10:13.775 rmmod nvme_keyring
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 146610 ']'
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 146610
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 146610 ']'
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 146610
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 146610
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 146610'
killing process with pid 146610
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 146610
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 146610
00:10:13.775 06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
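For readers reproducing the teardown by hand, the nvmftestfini trace above reduces to roughly the following steps; this is a condensed sketch rather than the script itself, and the PID and interface name are the ones from this particular run:

  sync
  # unload the initiator-side modules; the verbose output above shows
  # nvme_fabrics and nvme_keyring going away along with nvme_tcp
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # stop the SPDK target application (pid 146610 in this run); the
  # harness then waits for it to exit before continuing
  kill 146610
  # drop any SPDK_NVMF firewall rules the test added, keep the rest
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # remove the test IP from the cvl_0_1 test interface
  ip -4 addr flush cvl_0_1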
06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
06:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:15.687 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:15.687
00:10:15.687 real 0m27.928s
00:10:15.687 user 0m41.933s
00:10:15.687 sys 0m7.529s
00:10:15.687 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:15.687 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:15.687 ************************************
00:10:15.687 END TEST nvmf_zcopy
00:10:15.687 ************************************
00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:15.947 ************************************
00:10:15.947 START TEST nvmf_nmic
00:10:15.947 ************************************
00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:10:15.947 * Looking for test storage...
00:10:15.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:15.947 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:15.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.948 --rc genhtml_branch_coverage=1 00:10:15.948 --rc genhtml_function_coverage=1 00:10:15.948 --rc genhtml_legend=1 00:10:15.948 --rc geninfo_all_blocks=1 00:10:15.948 --rc geninfo_unexecuted_blocks=1 00:10:15.948 00:10:15.948 ' 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:15.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.948 --rc genhtml_branch_coverage=1 00:10:15.948 --rc genhtml_function_coverage=1 00:10:15.948 --rc genhtml_legend=1 00:10:15.948 --rc geninfo_all_blocks=1 00:10:15.948 --rc geninfo_unexecuted_blocks=1 00:10:15.948 00:10:15.948 ' 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:15.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.948 --rc genhtml_branch_coverage=1 00:10:15.948 --rc genhtml_function_coverage=1 00:10:15.948 --rc genhtml_legend=1 00:10:15.948 --rc geninfo_all_blocks=1 00:10:15.948 --rc geninfo_unexecuted_blocks=1 00:10:15.948 00:10:15.948 ' 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:15.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.948 --rc genhtml_branch_coverage=1 00:10:15.948 --rc genhtml_function_coverage=1 00:10:15.948 --rc genhtml_legend=1 00:10:15.948 --rc geninfo_all_blocks=1 00:10:15.948 --rc geninfo_unexecuted_blocks=1 00:10:15.948 00:10:15.948 ' 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
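The xtrace above is the lcov version gate in scripts/common.sh, splitting "1.15" and "2" on IFS=.-: and comparing component by component. A minimal stand-alone sketch of that kind of comparison, written here for readability and not copied from the script:

  # return success (0) when version $1 is strictly older than version $2
  version_lt() {
      local IFS=.-:                          # split on dots, dashes and colons, as the trace shows
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          local x=${a[i]:-0} y=${b[i]:-0}    # missing components count as 0
          (( x > y )) && return 1
          (( x < y )) && return 0
      done
      return 1                               # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"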
00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:15.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:15.948 
06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:15.948 06:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:18.486 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:18.486 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.486 06:55:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:18.486 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:18.486 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:18.486 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:18.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:10:18.487 00:10:18.487 --- 10.0.0.2 ping statistics --- 00:10:18.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.487 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:18.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:18.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:10:18.487 00:10:18.487 --- 10.0.0.1 ping statistics --- 00:10:18.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.487 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=151389 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 151389 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 151389 ']' 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.487 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.487 [2024-11-18 06:55:39.243006] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
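Condensed from the nvmf_tcp_init and nvmfappstart steps traced above: one physical port (cvl_0_0) is moved into its own network namespace so that target (10.0.0.2) and initiator (10.0.0.1) can exchange real TCP traffic on a single host, and nvmf_tgt is then launched inside that namespace. This is a sketch of the command sequence as it appears in the log; the cvl_0_0/cvl_0_1 names are whatever this rig enumerated and the 10.0.0.x addresses are the test defaults, so both will differ on other machines. Run as root.

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                         # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator-side port stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                      # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                  # target namespace -> initiator
    # Launched and backgrounded by nvmfappstart in the harness; shown explicitly here:
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &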
00:10:18.487 [2024-11-18 06:55:39.243085] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.487 [2024-11-18 06:55:39.316082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.487 [2024-11-18 06:55:39.361219] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.487 [2024-11-18 06:55:39.361292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.487 [2024-11-18 06:55:39.361305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.487 [2024-11-18 06:55:39.361329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.487 [2024-11-18 06:55:39.361338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.487 [2024-11-18 06:55:39.362752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.487 [2024-11-18 06:55:39.362812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.487 [2024-11-18 06:55:39.362877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.487 [2024-11-18 06:55:39.362881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.746 [2024-11-18 06:55:39.504626] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.746 Malloc0 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.746 [2024-11-18 06:55:39.567422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:18.746 test case1: single bdev can't be used in multiple subsystems 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.746 [2024-11-18 06:55:39.591244] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:18.746 [2024-11-18 06:55:39.591274] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:18.746 [2024-11-18 06:55:39.591303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.746 request: 00:10:18.746 { 00:10:18.746 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:18.746 "namespace": { 00:10:18.746 "bdev_name": "Malloc0", 00:10:18.746 "no_auto_visible": false 
00:10:18.746 }, 00:10:18.746 "method": "nvmf_subsystem_add_ns", 00:10:18.746 "req_id": 1 00:10:18.746 } 00:10:18.746 Got JSON-RPC error response 00:10:18.746 response: 00:10:18.746 { 00:10:18.746 "code": -32602, 00:10:18.746 "message": "Invalid parameters" 00:10:18.746 } 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:18.746 Adding namespace failed - expected result. 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:18.746 test case2: host connect to nvmf target in multiple paths 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.746 [2024-11-18 06:55:39.599360] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.746 06:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:19.313 06:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:20.248 06:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:20.248 06:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:20.248 06:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:20.248 06:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:20.248 06:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:22.149 06:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:22.149 06:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:22.149 06:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:22.149 06:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:22.149 06:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:22.149 06:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:22.149 06:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:22.149 [global] 00:10:22.149 thread=1 00:10:22.149 invalidate=1 00:10:22.149 rw=write 00:10:22.149 time_based=1 00:10:22.149 runtime=1 00:10:22.149 ioengine=libaio 00:10:22.149 direct=1 00:10:22.149 bs=4096 00:10:22.149 iodepth=1 00:10:22.149 norandommap=0 00:10:22.149 numjobs=1 00:10:22.149 00:10:22.149 verify_dump=1 00:10:22.150 verify_backlog=512 00:10:22.150 verify_state_save=0 00:10:22.150 do_verify=1 00:10:22.150 verify=crc32c-intel 00:10:22.150 [job0] 00:10:22.150 filename=/dev/nvme0n1 00:10:22.150 Could not set queue depth (nvme0n1) 00:10:22.716 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.716 fio-3.35 00:10:22.716 Starting 1 thread 00:10:23.651 00:10:23.651 job0: (groupid=0, jobs=1): err= 0: pid=152032: Mon Nov 18 06:55:44 2024 00:10:23.651 read: IOPS=2188, BW=8755KiB/s (8965kB/s)(8764KiB/1001msec) 00:10:23.651 slat (nsec): min=3976, max=47596, avg=10927.80, stdev=5178.86 00:10:23.651 clat (usec): min=179, max=378, avg=229.00, stdev=24.11 00:10:23.651 lat (usec): min=184, max=390, avg=239.93, stdev=26.98 00:10:23.651 clat percentiles (usec): 00:10:23.651 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 208], 00:10:23.651 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:10:23.651 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 260], 95.00th=[ 269], 00:10:23.651 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 375], 99.95th=[ 379], 00:10:23.651 | 99.99th=[ 379] 00:10:23.651 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:23.651 slat (nsec): min=5132, max=64851, avg=15161.58, stdev=7538.90 00:10:23.651 clat (usec): min=118, max=294, avg=163.11, stdev=25.13 00:10:23.651 lat (usec): min=124, max=333, avg=178.27, stdev=30.43 00:10:23.651 clat percentiles (usec): 00:10:23.651 | 1.00th=[ 122], 5.00th=[ 125], 10.00th=[ 129], 20.00th=[ 141], 00:10:23.651 | 30.00th=[ 147], 40.00th=[ 157], 50.00th=[ 165], 60.00th=[ 172], 00:10:23.651 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 202], 00:10:23.651 | 99.00th=[ 241], 99.50th=[ 251], 99.90th=[ 289], 99.95th=[ 293], 00:10:23.651 | 99.99th=[ 293] 00:10:23.651 bw ( KiB/s): min=12264, max=12264, per=100.00%, avg=12264.00, stdev= 0.00, samples=1 00:10:23.651 iops : min= 3066, max= 3066, avg=3066.00, stdev= 0.00, samples=1 00:10:23.651 lat (usec) : 250=91.24%, 500=8.76% 00:10:23.651 cpu : usr=5.30%, sys=7.40%, ctx=4751, majf=0, minf=1 00:10:23.651 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.651 issued rwts: total=2191,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.651 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.651 00:10:23.651 Run status group 0 (all jobs): 00:10:23.651 READ: bw=8755KiB/s (8965kB/s), 8755KiB/s-8755KiB/s (8965kB/s-8965kB/s), io=8764KiB (8974kB), run=1001-1001msec 00:10:23.651 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:10:23.651 00:10:23.651 Disk stats (read/write): 00:10:23.651 nvme0n1: ios=2098/2238, merge=0/0, ticks=458/330, in_queue=788, util=91.48% 00:10:23.651 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:10:23.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:23.910 rmmod nvme_tcp 00:10:23.910 rmmod nvme_fabrics 00:10:23.910 rmmod nvme_keyring 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 151389 ']' 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 151389 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 151389 ']' 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 151389 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 151389 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 151389' 00:10:23.910 killing process with pid 151389 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 151389 00:10:23.910 06:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 151389 
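The nmic test that just completed reduces to the RPC and nvme-cli sequence below. In the trace, rpc_cmd is the harness wrapper around scripts/rpc.py and the hostnqn/hostid values come from "nvme gen-hostnqn"; the standalone commands here are an equivalent hand-written sketch of the same steps, not the harness code itself.

    # Target side (all via scripts/rpc.py against the nvmf_tgt started earlier)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    # Test case 1 (negative): a bdev already claimed by cnode1 cannot be added to a second subsystem.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail: "bdev Malloc0 already claimed"

    # Test case 2 (host side): one connect per listener gives two paths to the same namespace,
    # and a single disconnect by NQN tears down both controllers.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    # ... fio write workload against /dev/nvme0n1 (see the job file and results above) ...
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1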
00:10:24.171 06:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:24.171 06:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:24.171 06:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:24.171 06:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:24.171 06:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:24.171 06:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:24.171 06:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:24.171 06:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:24.171 06:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:24.171 06:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.171 06:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.171 06:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.093 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:26.093 00:10:26.093 real 0m10.355s 00:10:26.093 user 0m23.567s 00:10:26.093 sys 0m2.913s 00:10:26.093 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.093 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.093 ************************************ 00:10:26.093 END TEST nvmf_nmic 00:10:26.093 ************************************ 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:26.353 ************************************ 00:10:26.353 START TEST nvmf_fio_target 00:10:26.353 ************************************ 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:26.353 * Looking for test storage... 
00:10:26.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:26.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.353 --rc genhtml_branch_coverage=1 00:10:26.353 --rc genhtml_function_coverage=1 00:10:26.353 --rc genhtml_legend=1 00:10:26.353 --rc geninfo_all_blocks=1 00:10:26.353 --rc geninfo_unexecuted_blocks=1 00:10:26.353 00:10:26.353 ' 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:26.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.353 --rc genhtml_branch_coverage=1 00:10:26.353 --rc genhtml_function_coverage=1 00:10:26.353 --rc genhtml_legend=1 00:10:26.353 --rc geninfo_all_blocks=1 00:10:26.353 --rc geninfo_unexecuted_blocks=1 00:10:26.353 00:10:26.353 ' 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:26.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.353 --rc genhtml_branch_coverage=1 00:10:26.353 --rc genhtml_function_coverage=1 00:10:26.353 --rc genhtml_legend=1 00:10:26.353 --rc geninfo_all_blocks=1 00:10:26.353 --rc geninfo_unexecuted_blocks=1 00:10:26.353 00:10:26.353 ' 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:26.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.353 --rc genhtml_branch_coverage=1 00:10:26.353 --rc genhtml_function_coverage=1 00:10:26.353 --rc genhtml_legend=1 00:10:26.353 --rc geninfo_all_blocks=1 00:10:26.353 --rc geninfo_unexecuted_blocks=1 00:10:26.353 00:10:26.353 ' 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:26.353 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:26.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:26.354 06:55:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:26.354 06:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.890 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:28.890 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:28.890 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:28.890 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:28.890 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:28.891 06:55:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:28.891 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:28.891 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.891 06:55:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:28.891 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:28.891 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:28.891 06:55:49 
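Note: the trace above resolves each detected E810 port (0000:0a:00.0 and 0000:0a:00.1, device ID 0x159b bound to the ice driver) to its kernel net device by globbing /sys/bus/pci/devices/$pci/net/. The following is a minimal standalone bash sketch of that sysfs lookup; the PCI addresses are taken from the log output, and the loop is an approximation of the gather step in nvmf/common.sh rather than a verbatim copy.

#!/usr/bin/env bash
# Sketch: map the two NVMf test NICs from PCI address to net device name,
# the same sysfs walk the trace above performs.
pci_devs=(0000:0a:00.0 0000:0a:00.1)   # E810 ports reported in the trace
net_devs=()
for pci in "${pci_devs[@]}"; do
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $netdir ]] || continue              # no net device bound to this function
        dev=${netdir##*/}                         # e.g. cvl_0_0
        state=$(cat "$netdir/operstate" 2>/dev/null)
        echo "Found net devices under $pci: $dev (operstate: $state)"
        net_devs+=("$dev")
    done
done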
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:28.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:28.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:10:28.891 00:10:28.891 --- 10.0.0.2 ping statistics --- 00:10:28.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.891 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:28.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:28.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:10:28.891 00:10:28.891 --- 10.0.0.1 ping statistics --- 00:10:28.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.891 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:28.891 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:28.892 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:28.892 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:28.892 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:28.892 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:28.892 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:28.892 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:28.892 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:28.892 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:28.892 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.892 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=154121 00:10:28.892 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:28.892 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 154121 00:10:28.892 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 154121 ']' 00:10:28.892 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.892 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.892 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.892 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.892 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.892 [2024-11-18 06:55:49.648003] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
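Note: before the target starts, the init step above splits the two ports into a point-to-point test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24 (target side), cvl_0_1 stays in the default namespace with 10.0.0.1/24 (initiator side), an iptables rule admits TCP port 4420, and connectivity is verified with one ping in each direction. The condensed sketch below mirrors the commands visible in the trace; interface and namespace names are the ones reported there, and error handling is omitted.

# Sketch: rebuild the two-namespace NVMe/TCP test topology shown in the trace.
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
TARGET_NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$TARGET_NS"
ip link set "$TARGET_IF" netns "$TARGET_NS"                  # target NIC into its own namespace
ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
ip netns exec "$TARGET_NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
ip netns exec "$TARGET_NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 "$TARGET_IP"                                 # initiator -> target
ip netns exec "$TARGET_NS" ping -c 1 "$INITIATOR_IP"   # target -> initiator

The nvmf_tgt application itself is then launched inside that namespace, as the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF" line above shows.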
00:10:28.892 [2024-11-18 06:55:49.648071] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.892 [2024-11-18 06:55:49.721704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:28.892 [2024-11-18 06:55:49.773167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:28.892 [2024-11-18 06:55:49.773226] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:28.892 [2024-11-18 06:55:49.773255] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:28.892 [2024-11-18 06:55:49.773267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:28.892 [2024-11-18 06:55:49.773284] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:28.892 [2024-11-18 06:55:49.774993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.892 [2024-11-18 06:55:49.775054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:28.892 [2024-11-18 06:55:49.775120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:28.892 [2024-11-18 06:55:49.775123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.150 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.150 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:29.150 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:29.150 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:29.150 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.150 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.151 06:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:29.409 [2024-11-18 06:55:50.213726] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:29.409 06:55:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:29.667 06:55:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:29.667 06:55:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:29.926 06:55:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:29.926 06:55:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:30.185 06:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:30.185 06:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:30.443 06:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:30.443 06:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:30.702 06:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:31.269 06:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:31.269 06:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:31.269 06:55:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:31.269 06:55:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:31.834 06:55:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:31.834 06:55:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:31.834 06:55:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:32.092 06:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:32.092 06:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:32.348 06:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:32.348 06:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:32.914 06:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:32.914 [2024-11-18 06:55:53.849553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.914 06:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:33.172 06:55:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:33.430 06:55:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:34.365 06:55:55 
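Note: once the target is listening on /var/tmp/spdk.sock, fio.sh provisions it entirely through rpc.py: a TCP transport, two standalone malloc bdevs plus a two-disk raid0 and a three-disk concat array, one subsystem carrying all four as namespaces, and a TCP listener on 10.0.0.2:4420; the initiator then attaches with nvme connect. The sketch below regroups the namespace additions, shortens paths and drops the --hostnqn/--hostid arguments shown in the trace; the command names and parameters themselves are the ones that appear above.

# Sketch: the provisioning sequence fio.sh drives over the RPC socket.
RPC=scripts/rpc.py                      # full path shortened from the trace
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192
for _ in 1 2 3 4 5 6 7; do
    $RPC bdev_malloc_create 64 512      # each call prints the new bdev name (Malloc0..Malloc6)
done
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    $RPC nvmf_subsystem_add_ns "$NQN" "$bdev"
done
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Initiator side: the four namespaces show up as /dev/nvme0n1..n4.
nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420

waitforserial then polls lsblk for four devices whose serial is SPDKISFASTANDAWESOME before the first fio run starts, which is what the grep -c loop below is doing.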
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:34.365 06:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:34.365 06:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:34.365 06:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:34.365 06:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:34.365 06:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:36.269 06:55:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:36.269 06:55:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:36.269 06:55:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:36.269 06:55:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:36.269 06:55:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:36.269 06:55:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:36.269 06:55:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:36.269 [global] 00:10:36.269 thread=1 00:10:36.269 invalidate=1 00:10:36.269 rw=write 00:10:36.269 time_based=1 00:10:36.269 runtime=1 00:10:36.269 ioengine=libaio 00:10:36.269 direct=1 00:10:36.269 bs=4096 00:10:36.269 iodepth=1 00:10:36.269 norandommap=0 00:10:36.269 numjobs=1 00:10:36.269 00:10:36.269 verify_dump=1 00:10:36.269 verify_backlog=512 00:10:36.269 verify_state_save=0 00:10:36.269 do_verify=1 00:10:36.269 verify=crc32c-intel 00:10:36.269 [job0] 00:10:36.269 filename=/dev/nvme0n1 00:10:36.269 [job1] 00:10:36.269 filename=/dev/nvme0n2 00:10:36.269 [job2] 00:10:36.269 filename=/dev/nvme0n3 00:10:36.269 [job3] 00:10:36.269 filename=/dev/nvme0n4 00:10:36.269 Could not set queue depth (nvme0n1) 00:10:36.269 Could not set queue depth (nvme0n2) 00:10:36.269 Could not set queue depth (nvme0n3) 00:10:36.269 Could not set queue depth (nvme0n4) 00:10:36.528 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.528 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.528 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.528 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.528 fio-3.35 00:10:36.528 Starting 4 threads 00:10:37.904 00:10:37.904 job0: (groupid=0, jobs=1): err= 0: pid=155198: Mon Nov 18 06:55:58 2024 00:10:37.904 read: IOPS=1611, BW=6446KiB/s (6600kB/s)(6452KiB/1001msec) 00:10:37.904 slat (nsec): min=5425, max=52749, avg=13539.81, stdev=5991.52 00:10:37.904 clat (usec): min=200, max=721, avg=313.34, stdev=74.53 00:10:37.904 lat (usec): min=207, max=752, avg=326.88, stdev=74.82 00:10:37.904 clat percentiles (usec): 00:10:37.904 | 1.00th=[ 219], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 251], 
00:10:37.904 | 30.00th=[ 260], 40.00th=[ 273], 50.00th=[ 293], 60.00th=[ 314], 00:10:37.904 | 70.00th=[ 338], 80.00th=[ 367], 90.00th=[ 433], 95.00th=[ 457], 00:10:37.904 | 99.00th=[ 537], 99.50th=[ 562], 99.90th=[ 603], 99.95th=[ 725], 00:10:37.904 | 99.99th=[ 725] 00:10:37.904 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:37.904 slat (nsec): min=6903, max=51614, avg=17912.79, stdev=6527.29 00:10:37.904 clat (usec): min=139, max=432, avg=204.61, stdev=35.02 00:10:37.904 lat (usec): min=148, max=469, avg=222.53, stdev=37.42 00:10:37.904 clat percentiles (usec): 00:10:37.904 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 178], 00:10:37.904 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 198], 60.00th=[ 204], 00:10:37.904 | 70.00th=[ 215], 80.00th=[ 229], 90.00th=[ 251], 95.00th=[ 273], 00:10:37.904 | 99.00th=[ 314], 99.50th=[ 363], 99.90th=[ 392], 99.95th=[ 396], 00:10:37.904 | 99.99th=[ 433] 00:10:37.904 bw ( KiB/s): min= 8192, max= 8192, per=37.38%, avg=8192.00, stdev= 0.00, samples=1 00:10:37.904 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:37.904 lat (usec) : 250=59.00%, 500=40.04%, 750=0.96% 00:10:37.904 cpu : usr=4.10%, sys=8.40%, ctx=3661, majf=0, minf=1 00:10:37.904 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.904 issued rwts: total=1613,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.904 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.904 job1: (groupid=0, jobs=1): err= 0: pid=155199: Mon Nov 18 06:55:58 2024 00:10:37.904 read: IOPS=1155, BW=4623KiB/s (4734kB/s)(4628KiB/1001msec) 00:10:37.904 slat (nsec): min=4890, max=28087, avg=8301.80, stdev=2456.66 00:10:37.904 clat (usec): min=181, max=40966, avg=525.34, stdev=2902.01 00:10:37.904 lat (usec): min=186, max=40975, avg=533.64, stdev=2902.09 00:10:37.904 clat percentiles (usec): 00:10:37.904 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 225], 20.00th=[ 237], 00:10:37.904 | 30.00th=[ 249], 40.00th=[ 269], 50.00th=[ 285], 60.00th=[ 310], 00:10:37.904 | 70.00th=[ 351], 80.00th=[ 375], 90.00th=[ 494], 95.00th=[ 537], 00:10:37.904 | 99.00th=[ 619], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:10:37.904 | 99.99th=[41157] 00:10:37.904 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:37.904 slat (nsec): min=6690, max=54390, avg=11096.85, stdev=2971.32 00:10:37.904 clat (usec): min=129, max=890, avg=231.21, stdev=52.36 00:10:37.904 lat (usec): min=138, max=903, avg=242.31, stdev=52.69 00:10:37.904 clat percentiles (usec): 00:10:37.904 | 1.00th=[ 135], 5.00th=[ 147], 10.00th=[ 161], 20.00th=[ 190], 00:10:37.904 | 30.00th=[ 206], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 243], 00:10:37.904 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 314], 00:10:37.904 | 99.00th=[ 396], 99.50th=[ 429], 99.90th=[ 469], 99.95th=[ 889], 00:10:37.904 | 99.99th=[ 889] 00:10:37.904 bw ( KiB/s): min= 7784, max= 7784, per=35.52%, avg=7784.00, stdev= 0.00, samples=1 00:10:37.904 iops : min= 1946, max= 1946, avg=1946.00, stdev= 0.00, samples=1 00:10:37.904 lat (usec) : 250=54.77%, 500=41.37%, 750=3.56%, 1000=0.07% 00:10:37.904 lat (msec) : 50=0.22% 00:10:37.904 cpu : usr=1.60%, sys=2.50%, ctx=2696, majf=0, minf=1 00:10:37.904 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.904 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.904 issued rwts: total=1157,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.904 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.904 job2: (groupid=0, jobs=1): err= 0: pid=155200: Mon Nov 18 06:55:58 2024 00:10:37.904 read: IOPS=1273, BW=5096KiB/s (5218kB/s)(5116KiB/1004msec) 00:10:37.904 slat (nsec): min=5758, max=47362, avg=14754.38, stdev=6016.66 00:10:37.904 clat (usec): min=192, max=41423, avg=502.63, stdev=3000.56 00:10:37.904 lat (usec): min=198, max=41431, avg=517.38, stdev=3000.45 00:10:37.904 clat percentiles (usec): 00:10:37.904 | 1.00th=[ 200], 5.00th=[ 210], 10.00th=[ 223], 20.00th=[ 233], 00:10:37.904 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 265], 00:10:37.904 | 70.00th=[ 330], 80.00th=[ 355], 90.00th=[ 367], 95.00th=[ 383], 00:10:37.904 | 99.00th=[ 453], 99.50th=[40633], 99.90th=[41157], 99.95th=[41681], 00:10:37.904 | 99.99th=[41681] 00:10:37.904 write: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec); 0 zone resets 00:10:37.905 slat (nsec): min=7534, max=54209, avg=17525.76, stdev=7422.19 00:10:37.905 clat (usec): min=146, max=375, avg=194.89, stdev=26.26 00:10:37.905 lat (usec): min=154, max=414, avg=212.42, stdev=30.65 00:10:37.905 clat percentiles (usec): 00:10:37.905 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 176], 00:10:37.905 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 198], 00:10:37.905 | 70.00th=[ 204], 80.00th=[ 215], 90.00th=[ 227], 95.00th=[ 233], 00:10:37.905 | 99.00th=[ 302], 99.50th=[ 322], 99.90th=[ 359], 99.95th=[ 375], 00:10:37.905 | 99.99th=[ 375] 00:10:37.905 bw ( KiB/s): min= 4096, max= 8192, per=28.04%, avg=6144.00, stdev=2896.31, samples=2 00:10:37.905 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:10:37.905 lat (usec) : 250=75.28%, 500=24.30%, 750=0.18% 00:10:37.905 lat (msec) : 50=0.25% 00:10:37.905 cpu : usr=2.89%, sys=6.58%, ctx=2816, majf=0, minf=1 00:10:37.905 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.905 issued rwts: total=1279,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.905 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.905 job3: (groupid=0, jobs=1): err= 0: pid=155201: Mon Nov 18 06:55:58 2024 00:10:37.905 read: IOPS=226, BW=907KiB/s (928kB/s)(932KiB/1028msec) 00:10:37.905 slat (nsec): min=6045, max=42344, avg=14176.12, stdev=7337.73 00:10:37.905 clat (usec): min=358, max=41088, avg=3798.84, stdev=11093.03 00:10:37.905 lat (usec): min=371, max=41105, avg=3813.02, stdev=11095.12 00:10:37.905 clat percentiles (usec): 00:10:37.905 | 1.00th=[ 367], 5.00th=[ 392], 10.00th=[ 408], 20.00th=[ 424], 00:10:37.905 | 30.00th=[ 453], 40.00th=[ 478], 50.00th=[ 515], 60.00th=[ 529], 00:10:37.905 | 70.00th=[ 553], 80.00th=[ 603], 90.00th=[ 652], 95.00th=[41157], 00:10:37.905 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:37.905 | 99.99th=[41157] 00:10:37.905 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:10:37.905 slat (nsec): min=6619, max=45104, avg=14280.43, stdev=7307.44 00:10:37.905 clat (usec): min=161, max=454, avg=252.31, stdev=35.71 00:10:37.905 lat (usec): min=170, max=473, avg=266.59, stdev=35.28 00:10:37.905 
clat percentiles (usec): 00:10:37.905 | 1.00th=[ 182], 5.00th=[ 210], 10.00th=[ 223], 20.00th=[ 229], 00:10:37.905 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 253], 00:10:37.905 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 314], 00:10:37.905 | 99.00th=[ 404], 99.50th=[ 424], 99.90th=[ 453], 99.95th=[ 453], 00:10:37.905 | 99.99th=[ 453] 00:10:37.905 bw ( KiB/s): min= 4096, max= 4096, per=18.69%, avg=4096.00, stdev= 0.00, samples=1 00:10:37.905 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:37.905 lat (usec) : 250=37.72%, 500=45.77%, 750=13.83% 00:10:37.905 lat (msec) : 2=0.13%, 50=2.55% 00:10:37.905 cpu : usr=0.39%, sys=1.07%, ctx=745, majf=0, minf=1 00:10:37.905 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.905 issued rwts: total=233,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.905 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.905 00:10:37.905 Run status group 0 (all jobs): 00:10:37.905 READ: bw=16.3MiB/s (17.1MB/s), 907KiB/s-6446KiB/s (928kB/s-6600kB/s), io=16.7MiB (17.5MB), run=1001-1028msec 00:10:37.905 WRITE: bw=21.4MiB/s (22.4MB/s), 1992KiB/s-8184KiB/s (2040kB/s-8380kB/s), io=22.0MiB (23.1MB), run=1001-1028msec 00:10:37.905 00:10:37.905 Disk stats (read/write): 00:10:37.905 nvme0n1: ios=1586/1604, merge=0/0, ticks=466/297, in_queue=763, util=86.47% 00:10:37.905 nvme0n2: ios=1049/1065, merge=0/0, ticks=1468/240, in_queue=1708, util=89.00% 00:10:37.905 nvme0n3: ios=1252/1536, merge=0/0, ticks=1193/292, in_queue=1485, util=93.10% 00:10:37.905 nvme0n4: ios=285/512, merge=0/0, ticks=747/124, in_queue=871, util=95.36% 00:10:37.905 06:55:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:37.905 [global] 00:10:37.905 thread=1 00:10:37.905 invalidate=1 00:10:37.905 rw=randwrite 00:10:37.905 time_based=1 00:10:37.905 runtime=1 00:10:37.905 ioengine=libaio 00:10:37.905 direct=1 00:10:37.905 bs=4096 00:10:37.905 iodepth=1 00:10:37.905 norandommap=0 00:10:37.905 numjobs=1 00:10:37.905 00:10:37.905 verify_dump=1 00:10:37.905 verify_backlog=512 00:10:37.905 verify_state_save=0 00:10:37.905 do_verify=1 00:10:37.905 verify=crc32c-intel 00:10:37.905 [job0] 00:10:37.905 filename=/dev/nvme0n1 00:10:37.905 [job1] 00:10:37.905 filename=/dev/nvme0n2 00:10:37.905 [job2] 00:10:37.905 filename=/dev/nvme0n3 00:10:37.905 [job3] 00:10:37.905 filename=/dev/nvme0n4 00:10:37.905 Could not set queue depth (nvme0n1) 00:10:37.905 Could not set queue depth (nvme0n2) 00:10:37.905 Could not set queue depth (nvme0n3) 00:10:37.905 Could not set queue depth (nvme0n4) 00:10:37.905 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:37.905 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:37.905 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:37.905 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:37.905 fio-3.35 00:10:37.905 Starting 4 threads 00:10:39.280 00:10:39.280 job0: (groupid=0, jobs=1): err= 0: pid=155433: Mon Nov 18 06:56:00 2024 00:10:39.280 read: 
IOPS=21, BW=84.6KiB/s (86.6kB/s)(88.0KiB/1040msec) 00:10:39.280 slat (nsec): min=9975, max=35641, avg=22696.41, stdev=9501.80 00:10:39.280 clat (usec): min=40884, max=41047, avg=40970.56, stdev=40.54 00:10:39.280 lat (usec): min=40917, max=41060, avg=40993.25, stdev=35.83 00:10:39.280 clat percentiles (usec): 00:10:39.280 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:39.280 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:39.280 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:39.280 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:39.280 | 99.99th=[41157] 00:10:39.280 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:10:39.280 slat (nsec): min=7746, max=57109, avg=16510.28, stdev=7605.31 00:10:39.280 clat (usec): min=155, max=513, avg=247.13, stdev=63.43 00:10:39.280 lat (usec): min=166, max=535, avg=263.64, stdev=66.69 00:10:39.280 clat percentiles (usec): 00:10:39.280 | 1.00th=[ 159], 5.00th=[ 184], 10.00th=[ 194], 20.00th=[ 206], 00:10:39.280 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 237], 00:10:39.280 | 70.00th=[ 247], 80.00th=[ 277], 90.00th=[ 326], 95.00th=[ 412], 00:10:39.280 | 99.00th=[ 457], 99.50th=[ 510], 99.90th=[ 515], 99.95th=[ 515], 00:10:39.280 | 99.99th=[ 515] 00:10:39.280 bw ( KiB/s): min= 4096, max= 4096, per=26.93%, avg=4096.00, stdev= 0.00, samples=1 00:10:39.280 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:39.280 lat (usec) : 250=68.91%, 500=26.22%, 750=0.75% 00:10:39.280 lat (msec) : 50=4.12% 00:10:39.280 cpu : usr=0.67%, sys=1.06%, ctx=534, majf=0, minf=2 00:10:39.280 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.280 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.280 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.280 job1: (groupid=0, jobs=1): err= 0: pid=155434: Mon Nov 18 06:56:00 2024 00:10:39.280 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:39.280 slat (nsec): min=4508, max=50893, avg=7231.61, stdev=5278.77 00:10:39.280 clat (usec): min=174, max=41045, avg=1565.00, stdev=7304.80 00:10:39.280 lat (usec): min=180, max=41064, avg=1572.24, stdev=7308.47 00:10:39.280 clat percentiles (usec): 00:10:39.280 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 192], 00:10:39.280 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:10:39.280 | 70.00th=[ 208], 80.00th=[ 251], 90.00th=[ 285], 95.00th=[ 310], 00:10:39.280 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:39.280 | 99.99th=[41157] 00:10:39.280 write: IOPS=882, BW=3528KiB/s (3613kB/s)(3532KiB/1001msec); 0 zone resets 00:10:39.280 slat (nsec): min=5900, max=54908, avg=12042.80, stdev=7366.50 00:10:39.280 clat (usec): min=119, max=530, avg=204.56, stdev=74.61 00:10:39.280 lat (usec): min=125, max=540, avg=216.60, stdev=78.91 00:10:39.280 clat percentiles (usec): 00:10:39.280 | 1.00th=[ 124], 5.00th=[ 127], 10.00th=[ 130], 20.00th=[ 135], 00:10:39.280 | 30.00th=[ 143], 40.00th=[ 155], 50.00th=[ 202], 60.00th=[ 215], 00:10:39.280 | 70.00th=[ 235], 80.00th=[ 262], 90.00th=[ 289], 95.00th=[ 359], 00:10:39.280 | 99.00th=[ 445], 99.50th=[ 469], 99.90th=[ 529], 99.95th=[ 529], 00:10:39.280 | 99.99th=[ 529] 00:10:39.280 bw ( KiB/s): 
min= 4096, max= 4096, per=26.93%, avg=4096.00, stdev= 0.00, samples=1 00:10:39.280 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:39.280 lat (usec) : 250=76.85%, 500=21.86%, 750=0.07% 00:10:39.280 lat (msec) : 50=1.22% 00:10:39.280 cpu : usr=1.10%, sys=1.00%, ctx=1396, majf=0, minf=1 00:10:39.280 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.280 issued rwts: total=512,883,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.280 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.280 job2: (groupid=0, jobs=1): err= 0: pid=155435: Mon Nov 18 06:56:00 2024 00:10:39.280 read: IOPS=1921, BW=7684KiB/s (7869kB/s)(7692KiB/1001msec) 00:10:39.280 slat (nsec): min=5510, max=54229, avg=13339.86, stdev=5561.28 00:10:39.280 clat (usec): min=199, max=40816, avg=275.15, stdev=925.92 00:10:39.280 lat (usec): min=206, max=40822, avg=288.49, stdev=925.81 00:10:39.280 clat percentiles (usec): 00:10:39.280 | 1.00th=[ 215], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 237], 00:10:39.280 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:10:39.280 | 70.00th=[ 255], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 281], 00:10:39.280 | 99.00th=[ 482], 99.50th=[ 537], 99.90th=[ 742], 99.95th=[40633], 00:10:39.280 | 99.99th=[40633] 00:10:39.280 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:39.280 slat (nsec): min=8246, max=55148, avg=18102.13, stdev=6602.61 00:10:39.280 clat (usec): min=137, max=380, avg=190.82, stdev=28.53 00:10:39.280 lat (usec): min=146, max=433, avg=208.92, stdev=31.50 00:10:39.281 clat percentiles (usec): 00:10:39.281 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 172], 00:10:39.281 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:10:39.281 | 70.00th=[ 198], 80.00th=[ 210], 90.00th=[ 227], 95.00th=[ 237], 00:10:39.281 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 359], 99.95th=[ 363], 00:10:39.281 | 99.99th=[ 379] 00:10:39.281 bw ( KiB/s): min= 8192, max= 8192, per=53.85%, avg=8192.00, stdev= 0.00, samples=1 00:10:39.281 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:39.281 lat (usec) : 250=78.44%, 500=21.23%, 750=0.30% 00:10:39.281 lat (msec) : 50=0.03% 00:10:39.281 cpu : usr=4.40%, sys=8.60%, ctx=3973, majf=0, minf=1 00:10:39.281 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.281 issued rwts: total=1923,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.281 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.281 job3: (groupid=0, jobs=1): err= 0: pid=155436: Mon Nov 18 06:56:00 2024 00:10:39.281 read: IOPS=22, BW=90.2KiB/s (92.4kB/s)(92.0KiB/1020msec) 00:10:39.281 slat (nsec): min=7521, max=35032, avg=23810.78, stdev=10216.60 00:10:39.281 clat (usec): min=224, max=41044, avg=39169.21, stdev=8490.63 00:10:39.281 lat (usec): min=233, max=41059, avg=39193.02, stdev=8493.87 00:10:39.281 clat percentiles (usec): 00:10:39.281 | 1.00th=[ 225], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:39.281 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:39.281 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 
00:10:39.281 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:39.281 | 99.99th=[41157] 00:10:39.281 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:10:39.281 slat (nsec): min=6959, max=75612, avg=15019.79, stdev=7039.11 00:10:39.281 clat (usec): min=146, max=434, avg=211.26, stdev=55.37 00:10:39.281 lat (usec): min=155, max=442, avg=226.28, stdev=57.78 00:10:39.281 clat percentiles (usec): 00:10:39.281 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:10:39.281 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 190], 60.00th=[ 212], 00:10:39.281 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 277], 95.00th=[ 347], 00:10:39.281 | 99.00th=[ 412], 99.50th=[ 424], 99.90th=[ 433], 99.95th=[ 433], 00:10:39.281 | 99.99th=[ 433] 00:10:39.281 bw ( KiB/s): min= 4096, max= 4096, per=26.93%, avg=4096.00, stdev= 0.00, samples=1 00:10:39.281 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:39.281 lat (usec) : 250=83.36%, 500=12.52% 00:10:39.281 lat (msec) : 50=4.11% 00:10:39.281 cpu : usr=0.39%, sys=0.69%, ctx=536, majf=0, minf=1 00:10:39.281 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.281 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.281 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.281 00:10:39.281 Run status group 0 (all jobs): 00:10:39.281 READ: bw=9538KiB/s (9767kB/s), 84.6KiB/s-7684KiB/s (86.6kB/s-7869kB/s), io=9920KiB (10.2MB), run=1001-1040msec 00:10:39.281 WRITE: bw=14.9MiB/s (15.6MB/s), 1969KiB/s-8184KiB/s (2016kB/s-8380kB/s), io=15.4MiB (16.2MB), run=1001-1040msec 00:10:39.281 00:10:39.281 Disk stats (read/write): 00:10:39.281 nvme0n1: ios=67/512, merge=0/0, ticks=723/119, in_queue=842, util=87.27% 00:10:39.281 nvme0n2: ios=148/512, merge=0/0, ticks=1024/123, in_queue=1147, util=90.15% 00:10:39.281 nvme0n3: ios=1592/2045, merge=0/0, ticks=908/363, in_queue=1271, util=93.66% 00:10:39.281 nvme0n4: ios=76/512, merge=0/0, ticks=840/109, in_queue=949, util=94.24% 00:10:39.281 06:56:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:39.281 [global] 00:10:39.281 thread=1 00:10:39.281 invalidate=1 00:10:39.281 rw=write 00:10:39.281 time_based=1 00:10:39.281 runtime=1 00:10:39.281 ioengine=libaio 00:10:39.281 direct=1 00:10:39.281 bs=4096 00:10:39.281 iodepth=128 00:10:39.281 norandommap=0 00:10:39.281 numjobs=1 00:10:39.281 00:10:39.281 verify_dump=1 00:10:39.281 verify_backlog=512 00:10:39.281 verify_state_save=0 00:10:39.281 do_verify=1 00:10:39.281 verify=crc32c-intel 00:10:39.281 [job0] 00:10:39.281 filename=/dev/nvme0n1 00:10:39.281 [job1] 00:10:39.281 filename=/dev/nvme0n2 00:10:39.281 [job2] 00:10:39.281 filename=/dev/nvme0n3 00:10:39.281 [job3] 00:10:39.281 filename=/dev/nvme0n4 00:10:39.281 Could not set queue depth (nvme0n1) 00:10:39.281 Could not set queue depth (nvme0n2) 00:10:39.281 Could not set queue depth (nvme0n3) 00:10:39.281 Could not set queue depth (nvme0n4) 00:10:39.540 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.540 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.540 job2: 
(g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.540 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.540 fio-3.35 00:10:39.540 Starting 4 threads 00:10:40.916 00:10:40.916 job0: (groupid=0, jobs=1): err= 0: pid=155751: Mon Nov 18 06:56:01 2024 00:10:40.916 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:10:40.916 slat (usec): min=2, max=32095, avg=122.74, stdev=1053.09 00:10:40.916 clat (usec): min=4508, max=74643, avg=17316.85, stdev=12721.73 00:10:40.916 lat (usec): min=4516, max=74659, avg=17439.59, stdev=12798.99 00:10:40.916 clat percentiles (usec): 00:10:40.916 | 1.00th=[ 5145], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[ 9634], 00:10:40.916 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11338], 60.00th=[12911], 00:10:40.916 | 70.00th=[15533], 80.00th=[21890], 90.00th=[38011], 95.00th=[50594], 00:10:40.916 | 99.00th=[57410], 99.50th=[57410], 99.90th=[63177], 99.95th=[72877], 00:10:40.916 | 99.99th=[74974] 00:10:40.916 write: IOPS=3872, BW=15.1MiB/s (15.9MB/s)(15.2MiB/1003msec); 0 zone resets 00:10:40.916 slat (usec): min=4, max=23897, avg=133.90, stdev=1026.29 00:10:40.916 clat (usec): min=523, max=74262, avg=16625.79, stdev=11397.36 00:10:40.916 lat (usec): min=860, max=74279, avg=16759.69, stdev=11503.06 00:10:40.916 clat percentiles (usec): 00:10:40.916 | 1.00th=[ 4555], 5.00th=[ 7635], 10.00th=[ 8717], 20.00th=[10159], 00:10:40.916 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11600], 60.00th=[12911], 00:10:40.916 | 70.00th=[14615], 80.00th=[22414], 90.00th=[34866], 95.00th=[45876], 00:10:40.916 | 99.00th=[60031], 99.50th=[60031], 99.90th=[60556], 99.95th=[63177], 00:10:40.916 | 99.99th=[73925] 00:10:40.916 bw ( KiB/s): min=12288, max=17760, per=26.12%, avg=15024.00, stdev=3869.29, samples=2 00:10:40.916 iops : min= 3072, max= 4440, avg=3756.00, stdev=967.32, samples=2 00:10:40.916 lat (usec) : 750=0.01%, 1000=0.05% 00:10:40.916 lat (msec) : 4=0.13%, 10=20.98%, 20=56.28%, 50=18.22%, 100=4.31% 00:10:40.916 cpu : usr=2.99%, sys=5.79%, ctx=316, majf=0, minf=1 00:10:40.916 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:40.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.916 issued rwts: total=3584,3884,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.916 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.916 job1: (groupid=0, jobs=1): err= 0: pid=155771: Mon Nov 18 06:56:01 2024 00:10:40.916 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:10:40.916 slat (usec): min=2, max=20950, avg=154.08, stdev=1077.26 00:10:40.916 clat (msec): min=5, max=100, avg=18.15, stdev=14.26 00:10:40.916 lat (msec): min=5, max=101, avg=18.30, stdev=14.36 00:10:40.916 clat percentiles (msec): 00:10:40.916 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 12], 00:10:40.916 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 15], 00:10:40.916 | 70.00th=[ 17], 80.00th=[ 19], 90.00th=[ 28], 95.00th=[ 44], 00:10:40.916 | 99.00th=[ 100], 99.50th=[ 101], 99.90th=[ 101], 99.95th=[ 102], 00:10:40.916 | 99.99th=[ 102] 00:10:40.916 write: IOPS=3523, BW=13.8MiB/s (14.4MB/s)(13.8MiB/1004msec); 0 zone resets 00:10:40.916 slat (usec): min=3, max=16323, avg=139.53, stdev=895.15 00:10:40.916 clat (usec): min=1324, max=101244, avg=20230.99, stdev=15300.21 00:10:40.916 lat (usec): min=1410, max=101251, 
avg=20370.52, stdev=15369.41 00:10:40.916 clat percentiles (msec): 00:10:40.916 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 11], 00:10:40.916 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 17], 00:10:40.916 | 70.00th=[ 22], 80.00th=[ 26], 90.00th=[ 44], 95.00th=[ 55], 00:10:40.916 | 99.00th=[ 69], 99.50th=[ 77], 99.90th=[ 102], 99.95th=[ 102], 00:10:40.916 | 99.99th=[ 102] 00:10:40.916 bw ( KiB/s): min=12736, max=14573, per=23.74%, avg=13654.50, stdev=1298.96, samples=2 00:10:40.916 iops : min= 3184, max= 3643, avg=3413.50, stdev=324.56, samples=2 00:10:40.916 lat (msec) : 2=0.39%, 4=0.54%, 10=12.42%, 20=58.88%, 50=22.42% 00:10:40.916 lat (msec) : 100=5.04%, 250=0.30% 00:10:40.916 cpu : usr=4.89%, sys=4.49%, ctx=315, majf=0, minf=1 00:10:40.916 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:40.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.916 issued rwts: total=3072,3538,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.916 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.916 job2: (groupid=0, jobs=1): err= 0: pid=155786: Mon Nov 18 06:56:01 2024 00:10:40.916 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:10:40.916 slat (usec): min=2, max=12093, avg=112.98, stdev=776.97 00:10:40.916 clat (usec): min=2630, max=42331, avg=14162.74, stdev=4760.84 00:10:40.916 lat (usec): min=2643, max=42342, avg=14275.72, stdev=4821.40 00:10:40.916 clat percentiles (usec): 00:10:40.916 | 1.00th=[ 5014], 5.00th=[ 8848], 10.00th=[11076], 20.00th=[11731], 00:10:40.916 | 30.00th=[12125], 40.00th=[12518], 50.00th=[13042], 60.00th=[13435], 00:10:40.916 | 70.00th=[14746], 80.00th=[16188], 90.00th=[17695], 95.00th=[21627], 00:10:40.916 | 99.00th=[36963], 99.50th=[38536], 99.90th=[42206], 99.95th=[42206], 00:10:40.916 | 99.99th=[42206] 00:10:40.916 write: IOPS=4439, BW=17.3MiB/s (18.2MB/s)(17.4MiB/1004msec); 0 zone resets 00:10:40.916 slat (usec): min=4, max=12650, avg=108.52, stdev=706.23 00:10:40.916 clat (usec): min=242, max=42335, avg=15373.03, stdev=6933.95 00:10:40.916 lat (usec): min=723, max=42348, avg=15481.55, stdev=6995.60 00:10:40.916 clat percentiles (usec): 00:10:40.916 | 1.00th=[ 963], 5.00th=[ 5932], 10.00th=[ 9110], 20.00th=[10814], 00:10:40.916 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13042], 60.00th=[14222], 00:10:40.916 | 70.00th=[16450], 80.00th=[19792], 90.00th=[27132], 95.00th=[30802], 00:10:40.916 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35914], 99.95th=[38536], 00:10:40.916 | 99.99th=[42206] 00:10:40.916 bw ( KiB/s): min=14160, max=20480, per=30.11%, avg=17320.00, stdev=4468.91, samples=2 00:10:40.916 iops : min= 3540, max= 5120, avg=4330.00, stdev=1117.23, samples=2 00:10:40.916 lat (usec) : 250=0.01%, 750=0.06%, 1000=0.57% 00:10:40.916 lat (msec) : 2=0.11%, 4=0.48%, 10=9.40%, 20=76.34%, 50=13.04% 00:10:40.916 cpu : usr=5.88%, sys=5.48%, ctx=341, majf=0, minf=1 00:10:40.916 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:40.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.917 issued rwts: total=4096,4457,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.917 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.917 job3: (groupid=0, jobs=1): err= 0: pid=155787: Mon Nov 18 06:56:01 2024 00:10:40.917 read: IOPS=2332, 
BW=9331KiB/s (9555kB/s)(9368KiB/1004msec) 00:10:40.917 slat (usec): min=2, max=41179, avg=188.73, stdev=1417.30 00:10:40.917 clat (usec): min=1258, max=89615, avg=21882.79, stdev=14600.26 00:10:40.917 lat (usec): min=5456, max=90863, avg=22071.53, stdev=14704.14 00:10:40.917 clat percentiles (usec): 00:10:40.917 | 1.00th=[ 7767], 5.00th=[11600], 10.00th=[11731], 20.00th=[14091], 00:10:40.917 | 30.00th=[14353], 40.00th=[14484], 50.00th=[15664], 60.00th=[16909], 00:10:40.917 | 70.00th=[19792], 80.00th=[25035], 90.00th=[48497], 95.00th=[58459], 00:10:40.917 | 99.00th=[76022], 99.50th=[76022], 99.90th=[76022], 99.95th=[76022], 00:10:40.917 | 99.99th=[89654] 00:10:40.917 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:10:40.917 slat (usec): min=4, max=11678, avg=211.25, stdev=967.54 00:10:40.917 clat (msec): min=2, max=111, avg=29.15, stdev=19.95 00:10:40.917 lat (msec): min=2, max=111, avg=29.36, stdev=20.06 00:10:40.917 clat percentiles (msec): 00:10:40.917 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 13], 20.00th=[ 15], 00:10:40.917 | 30.00th=[ 21], 40.00th=[ 22], 50.00th=[ 23], 60.00th=[ 26], 00:10:40.917 | 70.00th=[ 31], 80.00th=[ 40], 90.00th=[ 55], 95.00th=[ 66], 00:10:40.917 | 99.00th=[ 105], 99.50th=[ 111], 99.90th=[ 112], 99.95th=[ 112], 00:10:40.917 | 99.99th=[ 112] 00:10:40.917 bw ( KiB/s): min= 8464, max=12016, per=17.80%, avg=10240.00, stdev=2511.64, samples=2 00:10:40.917 iops : min= 2116, max= 3004, avg=2560.00, stdev=627.91, samples=2 00:10:40.917 lat (msec) : 2=0.02%, 4=0.41%, 10=4.02%, 20=42.49%, 50=41.57% 00:10:40.917 lat (msec) : 100=10.53%, 250=0.96% 00:10:40.917 cpu : usr=2.19%, sys=3.49%, ctx=307, majf=0, minf=1 00:10:40.917 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:10:40.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.917 issued rwts: total=2342,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.917 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.917 00:10:40.917 Run status group 0 (all jobs): 00:10:40.917 READ: bw=50.9MiB/s (53.4MB/s), 9331KiB/s-15.9MiB/s (9555kB/s-16.7MB/s), io=51.1MiB (53.6MB), run=1003-1004msec 00:10:40.917 WRITE: bw=56.2MiB/s (58.9MB/s), 9.96MiB/s-17.3MiB/s (10.4MB/s-18.2MB/s), io=56.4MiB (59.1MB), run=1003-1004msec 00:10:40.917 00:10:40.917 Disk stats (read/write): 00:10:40.917 nvme0n1: ios=2605/2867, merge=0/0, ticks=24540/20333, in_queue=44873, util=97.39% 00:10:40.917 nvme0n2: ios=2809/3072, merge=0/0, ticks=34444/44628, in_queue=79072, util=87.60% 00:10:40.917 nvme0n3: ios=3636/3631, merge=0/0, ticks=43584/48130, in_queue=91714, util=97.39% 00:10:40.917 nvme0n4: ios=2102/2431, merge=0/0, ticks=33707/67237, in_queue=100944, util=97.79% 00:10:40.917 06:56:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:40.917 [global] 00:10:40.917 thread=1 00:10:40.917 invalidate=1 00:10:40.917 rw=randwrite 00:10:40.917 time_based=1 00:10:40.917 runtime=1 00:10:40.917 ioengine=libaio 00:10:40.917 direct=1 00:10:40.917 bs=4096 00:10:40.917 iodepth=128 00:10:40.917 norandommap=0 00:10:40.917 numjobs=1 00:10:40.917 00:10:40.917 verify_dump=1 00:10:40.917 verify_backlog=512 00:10:40.917 verify_state_save=0 00:10:40.917 do_verify=1 00:10:40.917 verify=crc32c-intel 00:10:40.917 [job0] 00:10:40.917 filename=/dev/nvme0n1 00:10:40.917 
[job1] 00:10:40.917 filename=/dev/nvme0n2 00:10:40.917 [job2] 00:10:40.917 filename=/dev/nvme0n3 00:10:40.917 [job3] 00:10:40.917 filename=/dev/nvme0n4 00:10:40.917 Could not set queue depth (nvme0n1) 00:10:40.917 Could not set queue depth (nvme0n2) 00:10:40.917 Could not set queue depth (nvme0n3) 00:10:40.917 Could not set queue depth (nvme0n4) 00:10:40.917 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:40.917 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:40.917 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:40.917 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:40.917 fio-3.35 00:10:40.917 Starting 4 threads 00:10:42.295 00:10:42.295 job0: (groupid=0, jobs=1): err= 0: pid=156013: Mon Nov 18 06:56:02 2024 00:10:42.295 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:10:42.295 slat (usec): min=2, max=23933, avg=163.71, stdev=1121.22 00:10:42.295 clat (usec): min=7136, max=61821, avg=20220.85, stdev=10902.82 00:10:42.295 lat (usec): min=7158, max=61832, avg=20384.56, stdev=11003.52 00:10:42.295 clat percentiles (usec): 00:10:42.295 | 1.00th=[ 8094], 5.00th=[10159], 10.00th=[10552], 20.00th=[11600], 00:10:42.295 | 30.00th=[12780], 40.00th=[14091], 50.00th=[15664], 60.00th=[19530], 00:10:42.295 | 70.00th=[23725], 80.00th=[27395], 90.00th=[36963], 95.00th=[43779], 00:10:42.295 | 99.00th=[59507], 99.50th=[60031], 99.90th=[61604], 99.95th=[61604], 00:10:42.295 | 99.99th=[61604] 00:10:42.295 write: IOPS=3511, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1006msec); 0 zone resets 00:10:42.295 slat (usec): min=3, max=12578, avg=133.02, stdev=868.61 00:10:42.295 clat (usec): min=2022, max=52589, avg=18432.82, stdev=8364.70 00:10:42.295 lat (usec): min=6359, max=52626, avg=18565.85, stdev=8415.06 00:10:42.295 clat percentiles (usec): 00:10:42.295 | 1.00th=[ 7701], 5.00th=[10159], 10.00th=[10552], 20.00th=[11731], 00:10:42.295 | 30.00th=[13042], 40.00th=[13698], 50.00th=[14615], 60.00th=[16188], 00:10:42.295 | 70.00th=[22676], 80.00th=[25822], 90.00th=[33424], 95.00th=[34341], 00:10:42.295 | 99.00th=[42730], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:10:42.295 | 99.99th=[52691] 00:10:42.295 bw ( KiB/s): min=12288, max=14952, per=20.45%, avg=13620.00, stdev=1883.73, samples=2 00:10:42.295 iops : min= 3072, max= 3738, avg=3405.00, stdev=470.93, samples=2 00:10:42.295 lat (msec) : 4=0.02%, 10=3.48%, 20=60.12%, 50=35.05%, 100=1.33% 00:10:42.295 cpu : usr=4.08%, sys=5.67%, ctx=211, majf=0, minf=1 00:10:42.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:42.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:42.295 issued rwts: total=3072,3533,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.295 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:42.295 job1: (groupid=0, jobs=1): err= 0: pid=156014: Mon Nov 18 06:56:02 2024 00:10:42.295 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(19.9MiB/1005msec) 00:10:42.295 slat (usec): min=2, max=15725, avg=94.83, stdev=648.17 00:10:42.295 clat (usec): min=716, max=32334, avg=12092.04, stdev=3691.64 00:10:42.295 lat (usec): min=4059, max=32342, avg=12186.87, stdev=3722.27 00:10:42.295 clat percentiles (usec): 00:10:42.295 | 1.00th=[ 
5735], 5.00th=[ 8356], 10.00th=[ 9110], 20.00th=[10290], 00:10:42.295 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11338], 60.00th=[11600], 00:10:42.295 | 70.00th=[11994], 80.00th=[13173], 90.00th=[16319], 95.00th=[19792], 00:10:42.295 | 99.00th=[27657], 99.50th=[30278], 99.90th=[32375], 99.95th=[32375], 00:10:42.295 | 99.99th=[32375] 00:10:42.295 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:10:42.295 slat (usec): min=3, max=7784, avg=92.80, stdev=473.03 00:10:42.295 clat (usec): min=2301, max=46578, avg=12769.00, stdev=6493.81 00:10:42.295 lat (usec): min=2307, max=46585, avg=12861.80, stdev=6536.62 00:10:42.295 clat percentiles (usec): 00:10:42.295 | 1.00th=[ 4621], 5.00th=[ 6587], 10.00th=[ 9110], 20.00th=[10421], 00:10:42.295 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:10:42.295 | 70.00th=[11994], 80.00th=[12518], 90.00th=[15008], 95.00th=[29230], 00:10:42.295 | 99.00th=[42730], 99.50th=[44303], 99.90th=[46400], 99.95th=[46400], 00:10:42.295 | 99.99th=[46400] 00:10:42.295 bw ( KiB/s): min=19856, max=21104, per=30.75%, avg=20480.00, stdev=882.47, samples=2 00:10:42.295 iops : min= 4964, max= 5276, avg=5120.00, stdev=220.62, samples=2 00:10:42.295 lat (usec) : 750=0.01% 00:10:42.295 lat (msec) : 4=0.22%, 10=16.11%, 20=78.01%, 50=5.64% 00:10:42.295 cpu : usr=5.08%, sys=8.86%, ctx=555, majf=0, minf=1 00:10:42.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:42.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:42.295 issued rwts: total=5105,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.295 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:42.295 job2: (groupid=0, jobs=1): err= 0: pid=156015: Mon Nov 18 06:56:02 2024 00:10:42.295 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:10:42.295 slat (usec): min=2, max=10885, avg=116.45, stdev=681.22 00:10:42.295 clat (usec): min=5207, max=34933, avg=14858.12, stdev=3540.73 00:10:42.295 lat (usec): min=5215, max=36751, avg=14974.57, stdev=3587.71 00:10:42.295 clat percentiles (usec): 00:10:42.295 | 1.00th=[ 8160], 5.00th=[10814], 10.00th=[11338], 20.00th=[12256], 00:10:42.295 | 30.00th=[12911], 40.00th=[13435], 50.00th=[13829], 60.00th=[14615], 00:10:42.295 | 70.00th=[16450], 80.00th=[17171], 90.00th=[19268], 95.00th=[21627], 00:10:42.295 | 99.00th=[27395], 99.50th=[27395], 99.90th=[31065], 99.95th=[31065], 00:10:42.295 | 99.99th=[34866] 00:10:42.295 write: IOPS=4215, BW=16.5MiB/s (17.3MB/s)(16.6MiB/1007msec); 0 zone resets 00:10:42.295 slat (usec): min=3, max=29765, avg=104.98, stdev=867.94 00:10:42.295 clat (usec): min=455, max=76017, avg=15483.39, stdev=8831.79 00:10:42.295 lat (usec): min=500, max=76060, avg=15588.36, stdev=8906.53 00:10:42.295 clat percentiles (usec): 00:10:42.295 | 1.00th=[ 2212], 5.00th=[ 6194], 10.00th=[10028], 20.00th=[11600], 00:10:42.295 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13173], 60.00th=[13960], 00:10:42.295 | 70.00th=[14746], 80.00th=[17433], 90.00th=[21103], 95.00th=[31327], 00:10:42.295 | 99.00th=[58983], 99.50th=[58983], 99.90th=[58983], 99.95th=[58983], 00:10:42.295 | 99.99th=[76022] 00:10:42.295 bw ( KiB/s): min=14760, max=18184, per=24.73%, avg=16472.00, stdev=2421.13, samples=2 00:10:42.295 iops : min= 3690, max= 4546, avg=4118.00, stdev=605.28, samples=2 00:10:42.295 lat (usec) : 500=0.01%, 750=0.04% 00:10:42.295 lat (msec) : 2=0.08%, 4=0.83%, 10=5.65%, 
20=82.00%, 50=10.62% 00:10:42.295 lat (msec) : 100=0.77% 00:10:42.295 cpu : usr=5.07%, sys=7.26%, ctx=405, majf=0, minf=1 00:10:42.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:42.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:42.295 issued rwts: total=4096,4245,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.295 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:42.295 job3: (groupid=0, jobs=1): err= 0: pid=156016: Mon Nov 18 06:56:02 2024 00:10:42.295 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:10:42.295 slat (usec): min=2, max=21720, avg=136.18, stdev=846.33 00:10:42.295 clat (usec): min=8137, max=40690, avg=18167.70, stdev=5469.12 00:10:42.295 lat (usec): min=8155, max=40715, avg=18303.88, stdev=5525.78 00:10:42.295 clat percentiles (usec): 00:10:42.295 | 1.00th=[ 8356], 5.00th=[10683], 10.00th=[12125], 20.00th=[12911], 00:10:42.295 | 30.00th=[13435], 40.00th=[16450], 50.00th=[18220], 60.00th=[19530], 00:10:42.295 | 70.00th=[21627], 80.00th=[22938], 90.00th=[24773], 95.00th=[27132], 00:10:42.295 | 99.00th=[33162], 99.50th=[33424], 99.90th=[35390], 99.95th=[35390], 00:10:42.296 | 99.99th=[40633] 00:10:42.296 write: IOPS=3868, BW=15.1MiB/s (15.8MB/s)(15.2MiB/1009msec); 0 zone resets 00:10:42.296 slat (usec): min=3, max=14065, avg=123.23, stdev=721.33 00:10:42.296 clat (usec): min=985, max=57037, avg=16102.18, stdev=7423.39 00:10:42.296 lat (usec): min=993, max=57065, avg=16225.41, stdev=7487.33 00:10:42.296 clat percentiles (usec): 00:10:42.296 | 1.00th=[ 9765], 5.00th=[11600], 10.00th=[11994], 20.00th=[12256], 00:10:42.296 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13566], 60.00th=[14615], 00:10:42.296 | 70.00th=[15270], 80.00th=[17433], 90.00th=[23987], 95.00th=[37487], 00:10:42.296 | 99.00th=[46924], 99.50th=[51643], 99.90th=[56886], 99.95th=[56886], 00:10:42.296 | 99.99th=[56886] 00:10:42.296 bw ( KiB/s): min=13816, max=16384, per=22.67%, avg=15100.00, stdev=1815.85, samples=2 00:10:42.296 iops : min= 3454, max= 4096, avg=3775.00, stdev=453.96, samples=2 00:10:42.296 lat (usec) : 1000=0.03% 00:10:42.296 lat (msec) : 10=2.04%, 20=74.29%, 50=23.23%, 100=0.41% 00:10:42.296 cpu : usr=3.77%, sys=7.44%, ctx=380, majf=0, minf=1 00:10:42.296 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:42.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:42.296 issued rwts: total=3584,3903,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.296 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:42.296 00:10:42.296 Run status group 0 (all jobs): 00:10:42.296 READ: bw=61.4MiB/s (64.4MB/s), 11.9MiB/s-19.8MiB/s (12.5MB/s-20.8MB/s), io=61.9MiB (64.9MB), run=1005-1009msec 00:10:42.296 WRITE: bw=65.0MiB/s (68.2MB/s), 13.7MiB/s-19.9MiB/s (14.4MB/s-20.9MB/s), io=65.6MiB (68.8MB), run=1005-1009msec 00:10:42.296 00:10:42.296 Disk stats (read/write): 00:10:42.296 nvme0n1: ios=2584/2847, merge=0/0, ticks=21507/16241, in_queue=37748, util=93.79% 00:10:42.296 nvme0n2: ios=4149/4395, merge=0/0, ticks=35535/38945, in_queue=74480, util=98.58% 00:10:42.296 nvme0n3: ios=3428/3584, merge=0/0, ticks=30073/33522, in_queue=63595, util=96.78% 00:10:42.296 nvme0n4: ios=3310/3584, merge=0/0, ticks=28786/25629, in_queue=54415, util=96.54% 00:10:42.296 06:56:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:42.296 06:56:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=156152 00:10:42.296 06:56:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:42.296 06:56:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:42.296 [global] 00:10:42.296 thread=1 00:10:42.296 invalidate=1 00:10:42.296 rw=read 00:10:42.296 time_based=1 00:10:42.296 runtime=10 00:10:42.296 ioengine=libaio 00:10:42.296 direct=1 00:10:42.296 bs=4096 00:10:42.296 iodepth=1 00:10:42.296 norandommap=1 00:10:42.296 numjobs=1 00:10:42.296 00:10:42.296 [job0] 00:10:42.296 filename=/dev/nvme0n1 00:10:42.296 [job1] 00:10:42.296 filename=/dev/nvme0n2 00:10:42.296 [job2] 00:10:42.296 filename=/dev/nvme0n3 00:10:42.296 [job3] 00:10:42.296 filename=/dev/nvme0n4 00:10:42.296 Could not set queue depth (nvme0n1) 00:10:42.296 Could not set queue depth (nvme0n2) 00:10:42.296 Could not set queue depth (nvme0n3) 00:10:42.296 Could not set queue depth (nvme0n4) 00:10:42.296 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.296 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.296 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.296 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.296 fio-3.35 00:10:42.296 Starting 4 threads 00:10:45.583 06:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:45.583 06:56:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:45.583 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=3698688, buflen=4096 00:10:45.583 fio: pid=156245, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:45.583 06:56:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:45.583 06:56:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:45.842 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=319488, buflen=4096 00:10:45.842 fio: pid=156244, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:46.100 06:56:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.100 06:56:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:46.100 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=356352, buflen=4096 00:10:46.100 fio: pid=156242, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:46.359 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=63647744, buflen=4096 00:10:46.359 fio: pid=156243, err=95/file:io_u.c:1889, func=io_u error, 
error=Operation not supported 00:10:46.359 06:56:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.359 06:56:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:46.359 00:10:46.359 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156242: Mon Nov 18 06:56:07 2024 00:10:46.359 read: IOPS=25, BW=100KiB/s (103kB/s)(348KiB/3468msec) 00:10:46.359 slat (usec): min=10, max=23982, avg=405.08, stdev=2751.40 00:10:46.359 clat (usec): min=304, max=46972, avg=39167.64, stdev=8585.89 00:10:46.359 lat (usec): min=321, max=65018, avg=39577.20, stdev=9102.44 00:10:46.359 clat percentiles (usec): 00:10:46.359 | 1.00th=[ 306], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:46.359 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:46.359 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:46.359 | 99.00th=[46924], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924], 00:10:46.359 | 99.99th=[46924] 00:10:46.359 bw ( KiB/s): min= 88, max= 112, per=0.57%, avg=101.33, stdev= 9.69, samples=6 00:10:46.359 iops : min= 22, max= 28, avg=25.33, stdev= 2.42, samples=6 00:10:46.359 lat (usec) : 500=4.55% 00:10:46.359 lat (msec) : 50=94.32% 00:10:46.359 cpu : usr=0.12%, sys=0.00%, ctx=90, majf=0, minf=2 00:10:46.359 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.359 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.359 issued rwts: total=88,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.359 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.359 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156243: Mon Nov 18 06:56:07 2024 00:10:46.359 read: IOPS=4136, BW=16.2MiB/s (16.9MB/s)(60.7MiB/3757msec) 00:10:46.359 slat (usec): min=5, max=31605, avg=15.65, stdev=315.99 00:10:46.359 clat (usec): min=170, max=3049, avg=221.88, stdev=41.34 00:10:46.359 lat (usec): min=175, max=31928, avg=237.53, stdev=320.35 00:10:46.359 clat percentiles (usec): 00:10:46.359 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 198], 00:10:46.359 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 221], 60.00th=[ 227], 00:10:46.359 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 262], 00:10:46.359 | 99.00th=[ 314], 99.50th=[ 359], 99.90th=[ 537], 99.95th=[ 627], 00:10:46.359 | 99.99th=[ 1647] 00:10:46.359 bw ( KiB/s): min=14904, max=17992, per=94.03%, avg=16626.43, stdev=1128.73, samples=7 00:10:46.359 iops : min= 3726, max= 4498, avg=4156.57, stdev=282.21, samples=7 00:10:46.359 lat (usec) : 250=90.40%, 500=9.45%, 750=0.12%, 1000=0.01% 00:10:46.359 lat (msec) : 2=0.01%, 4=0.01% 00:10:46.359 cpu : usr=3.54%, sys=6.10%, ctx=15546, majf=0, minf=2 00:10:46.359 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.359 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.359 issued rwts: total=15540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.359 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.359 job2: (groupid=0, jobs=1): err=95 
(file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156244: Mon Nov 18 06:56:07 2024 00:10:46.359 read: IOPS=24, BW=97.7KiB/s (100kB/s)(312KiB/3192msec) 00:10:46.359 slat (usec): min=12, max=1892, avg=46.10, stdev=210.58 00:10:46.359 clat (usec): min=580, max=44955, avg=40572.66, stdev=4630.85 00:10:46.359 lat (usec): min=611, max=44969, avg=40619.14, stdev=4637.44 00:10:46.359 clat percentiles (usec): 00:10:46.359 | 1.00th=[ 578], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:46.360 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:46.360 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:46.360 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:10:46.360 | 99.99th=[44827] 00:10:46.360 bw ( KiB/s): min= 96, max= 104, per=0.55%, avg=97.33, stdev= 3.27, samples=6 00:10:46.360 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:10:46.360 lat (usec) : 750=1.27% 00:10:46.360 lat (msec) : 50=97.47% 00:10:46.360 cpu : usr=0.09%, sys=0.00%, ctx=80, majf=0, minf=1 00:10:46.360 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.360 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.360 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.360 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.360 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156245: Mon Nov 18 06:56:07 2024 00:10:46.360 read: IOPS=308, BW=1234KiB/s (1264kB/s)(3612KiB/2927msec) 00:10:46.360 slat (nsec): min=6569, max=48346, avg=11417.62, stdev=6615.09 00:10:46.360 clat (usec): min=198, max=41201, avg=3201.39, stdev=10519.13 00:10:46.360 lat (usec): min=207, max=41217, avg=3212.80, stdev=10522.58 00:10:46.360 clat percentiles (usec): 00:10:46.360 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 225], 00:10:46.360 | 30.00th=[ 235], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 285], 00:10:46.360 | 70.00th=[ 297], 80.00th=[ 318], 90.00th=[ 424], 95.00th=[41157], 00:10:46.360 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:46.360 | 99.99th=[41157] 00:10:46.360 bw ( KiB/s): min= 96, max= 2472, per=5.53%, avg=977.60, stdev=998.45, samples=5 00:10:46.360 iops : min= 24, max= 618, avg=244.40, stdev=249.61, samples=5 00:10:46.360 lat (usec) : 250=36.95%, 500=55.31%, 750=0.44% 00:10:46.360 lat (msec) : 50=7.19% 00:10:46.360 cpu : usr=0.27%, sys=0.41%, ctx=904, majf=0, minf=1 00:10:46.360 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.360 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.360 issued rwts: total=904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.360 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.360 00:10:46.360 Run status group 0 (all jobs): 00:10:46.360 READ: bw=17.3MiB/s (18.1MB/s), 97.7KiB/s-16.2MiB/s (100kB/s-16.9MB/s), io=64.9MiB (68.0MB), run=2927-3757msec 00:10:46.360 00:10:46.360 Disk stats (read/write): 00:10:46.360 nvme0n1: ios=84/0, merge=0/0, ticks=3281/0, in_queue=3281, util=94.96% 00:10:46.360 nvme0n2: ios=14898/0, merge=0/0, ticks=3182/0, in_queue=3182, util=94.28% 00:10:46.360 nvme0n3: ios=76/0, merge=0/0, ticks=3084/0, in_queue=3084, util=96.74% 00:10:46.360 nvme0n4: 
ios=901/0, merge=0/0, ticks=2796/0, in_queue=2796, util=96.73% 00:10:46.622 06:56:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.622 06:56:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:46.880 06:56:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.880 06:56:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:47.139 06:56:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:47.139 06:56:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:47.397 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:47.397 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:47.655 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:47.655 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 156152 00:10:47.655 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:47.655 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:47.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.913 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:47.913 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:47.913 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:47.913 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.913 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:47.913 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.913 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:47.913 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:47.913 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:47.913 nvmf hotplug test: fio failed as expected 00:10:47.913 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:48.172 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:48.172 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f 
./local-job1-1-verify.state 00:10:48.172 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:48.172 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:48.172 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:48.172 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:48.172 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:48.172 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:48.172 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:48.172 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:48.172 06:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:48.172 rmmod nvme_tcp 00:10:48.172 rmmod nvme_fabrics 00:10:48.172 rmmod nvme_keyring 00:10:48.172 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:48.172 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:48.172 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:48.172 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 154121 ']' 00:10:48.172 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 154121 00:10:48.172 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 154121 ']' 00:10:48.172 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 154121 00:10:48.172 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:48.172 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:48.172 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 154121 00:10:48.172 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:48.172 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:48.172 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 154121' 00:10:48.172 killing process with pid 154121 00:10:48.172 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 154121 00:10:48.172 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 154121 00:10:48.432 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:48.432 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:48.432 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:48.432 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:48.432 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:48.432 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:48.432 06:56:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:48.432 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:48.432 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:48.432 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.432 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.432 06:56:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.342 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:50.342 00:10:50.342 real 0m24.193s 00:10:50.342 user 1m25.365s 00:10:50.342 sys 0m6.755s 00:10:50.342 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.342 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.342 ************************************ 00:10:50.342 END TEST nvmf_fio_target 00:10:50.342 ************************************ 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:50.602 ************************************ 00:10:50.602 START TEST nvmf_bdevio 00:10:50.602 ************************************ 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:50.602 * Looking for test storage... 
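[editor's note] For reference, the hotplug check that produced the "nvmf hotplug test: fio failed as expected" message in the nvmf_fio_target run above reduces to roughly the following shell sequence. This is a condensed paraphrase, not the contents of target/fio.sh itself: the $spdk shorthand is mine, and $malloc_bdevs/$raid_malloc_bdevs/$concat_malloc_bdevs stand in for the bdev names the test created earlier.

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Start a 10-second read job against the connected namespaces nvme0n1..n4
    # (block size 4096, queue depth 1, as in the [global] fio section above).
    "$spdk/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3

    # Pull the backing bdevs out from under the running job.
    "$spdk/scripts/rpc.py" bdev_raid_delete concat0
    "$spdk/scripts/rpc.py" bdev_raid_delete raid0
    for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
        "$spdk/scripts/rpc.py" bdev_malloc_delete "$malloc_bdev"
    done

    # fio is expected to fail (the "Operation not supported" io_u errors above),
    # so a non-zero status is the passing outcome; the trace recorded status 4.
    fio_status=0
    wait "$fio_pid" || fio_status=$?
    [ "$fio_status" -eq 0 ] || echo 'nvmf hotplug test: fio failed as expected'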
00:10:50.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:50.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.602 --rc genhtml_branch_coverage=1 00:10:50.602 --rc genhtml_function_coverage=1 00:10:50.602 --rc genhtml_legend=1 00:10:50.602 --rc geninfo_all_blocks=1 00:10:50.602 --rc geninfo_unexecuted_blocks=1 00:10:50.602 00:10:50.602 ' 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:50.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.602 --rc genhtml_branch_coverage=1 00:10:50.602 --rc genhtml_function_coverage=1 00:10:50.602 --rc genhtml_legend=1 00:10:50.602 --rc geninfo_all_blocks=1 00:10:50.602 --rc geninfo_unexecuted_blocks=1 00:10:50.602 00:10:50.602 ' 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:50.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.602 --rc genhtml_branch_coverage=1 00:10:50.602 --rc genhtml_function_coverage=1 00:10:50.602 --rc genhtml_legend=1 00:10:50.602 --rc geninfo_all_blocks=1 00:10:50.602 --rc geninfo_unexecuted_blocks=1 00:10:50.602 00:10:50.602 ' 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:50.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.602 --rc genhtml_branch_coverage=1 00:10:50.602 --rc genhtml_function_coverage=1 00:10:50.602 --rc genhtml_legend=1 00:10:50.602 --rc geninfo_all_blocks=1 00:10:50.602 --rc geninfo_unexecuted_blocks=1 00:10:50.602 00:10:50.602 ' 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.602 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:50.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:50.603 06:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.140 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:53.140 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:53.140 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:53.140 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:53.140 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:53.140 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:53.141 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:53.141 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:53.141 06:56:13 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:53.141 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:53.141 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:53.141 
06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:53.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:53.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:10:53.141 00:10:53.141 --- 10.0.0.2 ping statistics --- 00:10:53.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.141 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:53.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:53.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:10:53.141 00:10:53.141 --- 10.0.0.1 ping statistics --- 00:10:53.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.141 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:53.141 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:53.142 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:53.142 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:53.142 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:53.142 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.142 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=158982 00:10:53.142 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:53.142 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 158982 00:10:53.142 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 158982 ']' 00:10:53.142 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.142 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.142 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.142 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.142 06:56:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.142 [2024-11-18 06:56:13.920784] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
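[editor's note] The target-side networking that nvmftestinit traced above boils down to the commands below: one physical port is moved into a private network namespace to act as the target, the other stays in the root namespace as the initiator, and a single ping in each direction verifies connectivity before nvmf_tgt is started inside the namespace. Interface names, addresses and the iptables rule are copied from the trace; this condensed listing is illustrative, not captured output.

    # Flush stale addresses, then split the two cvl ports across namespaces.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port lives in the namespace

    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Allow TCP/4420 in (rule exactly as traced), then check reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1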
00:10:53.142 [2024-11-18 06:56:13.920885] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.142 [2024-11-18 06:56:13.989245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:53.142 [2024-11-18 06:56:14.032794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.142 [2024-11-18 06:56:14.032867] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:53.142 [2024-11-18 06:56:14.032880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.142 [2024-11-18 06:56:14.032891] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.142 [2024-11-18 06:56:14.032900] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:53.142 [2024-11-18 06:56:14.034460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:53.142 [2024-11-18 06:56:14.034592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:53.142 [2024-11-18 06:56:14.034657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:53.142 [2024-11-18 06:56:14.034661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.401 [2024-11-18 06:56:14.178441] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.401 Malloc0 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.401 06:56:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.401 [2024-11-18 06:56:14.250433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:53.401 { 00:10:53.401 "params": { 00:10:53.401 "name": "Nvme$subsystem", 00:10:53.401 "trtype": "$TEST_TRANSPORT", 00:10:53.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:53.401 "adrfam": "ipv4", 00:10:53.401 "trsvcid": "$NVMF_PORT", 00:10:53.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:53.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:53.401 "hdgst": ${hdgst:-false}, 00:10:53.401 "ddgst": ${ddgst:-false} 00:10:53.401 }, 00:10:53.401 "method": "bdev_nvme_attach_controller" 00:10:53.401 } 00:10:53.401 EOF 00:10:53.401 )") 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:53.401 06:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:53.401 "params": { 00:10:53.401 "name": "Nvme1", 00:10:53.401 "trtype": "tcp", 00:10:53.401 "traddr": "10.0.0.2", 00:10:53.401 "adrfam": "ipv4", 00:10:53.401 "trsvcid": "4420", 00:10:53.401 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:53.401 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:53.401 "hdgst": false, 00:10:53.401 "ddgst": false 00:10:53.401 }, 00:10:53.401 "method": "bdev_nvme_attach_controller" 00:10:53.401 }' 00:10:53.401 [2024-11-18 06:56:14.300708] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
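[editor's note] Condensed, the provisioning that bdevio.sh drives above amounts to five RPC calls against the freshly started nvmf_tgt, after which the bdevio binary is run with the bdev_nvme_attach_controller config printed above (passed on /dev/fd/62 in the trace). The rpc_cmd calls in the trace are the autotest wrapper around scripts/rpc.py; the $spdk shorthand below is mine, and gen_nvmf_target_json comes from test/nvmf/common.sh. A sketch, not the script itself:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$spdk/scripts/rpc.py"

    "$rpc" nvmf_create_transport -t tcp -o -u 8192            # TCP transport, options as traced
    "$rpc" bdev_malloc_create 64 512 -b Malloc0               # 64 MiB RAM bdev, 512-byte blocks
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # bdevio attaches to 10.0.0.2:4420 with the generated attach-controller config and
    # runs the CUnit suite against the resulting bdev, reported below as
    # Nvme1n1: 131072 blocks of 512 bytes (64 MiB).
    "$spdk/test/bdev/bdevio/bdevio" --json <(gen_nvmf_target_json)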
00:10:53.401 [2024-11-18 06:56:14.300794] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159034 ] 00:10:53.401 [2024-11-18 06:56:14.372347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:53.660 [2024-11-18 06:56:14.425045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.660 [2024-11-18 06:56:14.425098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.660 [2024-11-18 06:56:14.425102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.918 I/O targets: 00:10:53.918 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:53.918 00:10:53.918 00:10:53.918 CUnit - A unit testing framework for C - Version 2.1-3 00:10:53.918 http://cunit.sourceforge.net/ 00:10:53.918 00:10:53.918 00:10:53.918 Suite: bdevio tests on: Nvme1n1 00:10:53.918 Test: blockdev write read block ...passed 00:10:53.918 Test: blockdev write zeroes read block ...passed 00:10:53.918 Test: blockdev write zeroes read no split ...passed 00:10:54.176 Test: blockdev write zeroes read split ...passed 00:10:54.176 Test: blockdev write zeroes read split partial ...passed 00:10:54.176 Test: blockdev reset ...[2024-11-18 06:56:14.969280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:54.177 [2024-11-18 06:56:14.969385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x973b70 (9): Bad file descriptor 00:10:54.177 [2024-11-18 06:56:15.073006] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:54.177 passed 00:10:54.177 Test: blockdev write read 8 blocks ...passed 00:10:54.177 Test: blockdev write read size > 128k ...passed 00:10:54.177 Test: blockdev write read invalid size ...passed 00:10:54.435 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:54.435 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:54.435 Test: blockdev write read max offset ...passed 00:10:54.435 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:54.435 Test: blockdev writev readv 8 blocks ...passed 00:10:54.435 Test: blockdev writev readv 30 x 1block ...passed 00:10:54.435 Test: blockdev writev readv block ...passed 00:10:54.435 Test: blockdev writev readv size > 128k ...passed 00:10:54.435 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:54.435 Test: blockdev comparev and writev ...[2024-11-18 06:56:15.368645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.435 [2024-11-18 06:56:15.368682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:54.435 [2024-11-18 06:56:15.368707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.435 [2024-11-18 06:56:15.368725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:54.435 [2024-11-18 06:56:15.369119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.435 [2024-11-18 06:56:15.369143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:54.435 [2024-11-18 06:56:15.369176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.435 [2024-11-18 06:56:15.369194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:54.435 [2024-11-18 06:56:15.369578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.435 [2024-11-18 06:56:15.369604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:54.435 [2024-11-18 06:56:15.369626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.435 [2024-11-18 06:56:15.369643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:54.435 [2024-11-18 06:56:15.370007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.435 [2024-11-18 06:56:15.370031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:54.435 [2024-11-18 06:56:15.370053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.435 [2024-11-18 06:56:15.370070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:54.435 passed 00:10:54.694 Test: blockdev nvme passthru rw ...passed 00:10:54.694 Test: blockdev nvme passthru vendor specific ...[2024-11-18 06:56:15.453923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:54.694 [2024-11-18 06:56:15.454002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:54.694 [2024-11-18 06:56:15.454229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:54.694 [2024-11-18 06:56:15.454255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:54.694 [2024-11-18 06:56:15.454395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:54.694 [2024-11-18 06:56:15.454419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:54.694 [2024-11-18 06:56:15.454565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:54.694 [2024-11-18 06:56:15.454590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:54.694 passed 00:10:54.694 Test: blockdev nvme admin passthru ...passed 00:10:54.694 Test: blockdev copy ...passed 00:10:54.694 00:10:54.694 Run Summary: Type Total Ran Passed Failed Inactive 00:10:54.694 suites 1 1 n/a 0 0 00:10:54.694 tests 23 23 23 0 0 00:10:54.694 asserts 152 152 152 0 n/a 00:10:54.694 00:10:54.694 Elapsed time = 1.479 seconds 00:10:54.952 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:54.952 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.952 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:54.952 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.953 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:54.953 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:54.953 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:54.953 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:54.953 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:54.953 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:54.953 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:54.953 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:54.953 rmmod nvme_tcp 00:10:54.953 rmmod nvme_fabrics 00:10:54.953 rmmod nvme_keyring 00:10:54.953 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:54.953 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:54.953 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
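Stripped of the rpc_cmd and xtrace wrappers, the target-side calls made by bdevio.sh in this run reduce to the short RPC sequence below. The explicit scripts/rpc.py form and the default /var/tmp/spdk.sock socket are assumptions for standalone use (the harness issues the same methods through its rpc_cmd helper against the nvmf target app started earlier); the method names and arguments are the ones visible in the trace.

# target setup, as in bdevio.sh@18 through @22 above
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB RAM-backed bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# ... bdevio run (the CUnit output above) ...
# teardown, as in bdevio.sh@26 above
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1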
00:10:54.953 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 158982 ']' 00:10:54.953 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 158982 00:10:54.953 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 158982 ']' 00:10:54.953 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 158982 00:10:54.953 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:54.953 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.953 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 158982 00:10:54.953 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:54.953 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:54.953 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 158982' 00:10:54.953 killing process with pid 158982 00:10:54.953 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 158982 00:10:54.953 06:56:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 158982 00:10:55.213 06:56:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:55.213 06:56:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:55.213 06:56:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:55.213 06:56:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:55.213 06:56:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:55.213 06:56:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:55.213 06:56:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:55.213 06:56:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:55.213 06:56:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:55.213 06:56:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.213 06:56:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.213 06:56:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.136 06:56:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:57.136 00:10:57.136 real 0m6.731s 00:10:57.136 user 0m11.599s 00:10:57.136 sys 0m2.307s 00:10:57.136 06:56:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.136 06:56:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.136 ************************************ 00:10:57.136 END TEST nvmf_bdevio 00:10:57.136 ************************************ 00:10:57.136 06:56:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:57.136 00:10:57.136 real 3m55.958s 00:10:57.136 user 10m16.235s 00:10:57.136 sys 1m7.060s 00:10:57.136 
06:56:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.136 06:56:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:57.136 ************************************ 00:10:57.136 END TEST nvmf_target_core 00:10:57.136 ************************************ 00:10:57.395 06:56:18 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:57.395 06:56:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:57.395 06:56:18 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.396 06:56:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:57.396 ************************************ 00:10:57.396 START TEST nvmf_target_extra 00:10:57.396 ************************************ 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:57.396 * Looking for test storage... 00:10:57.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:57.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.396 --rc genhtml_branch_coverage=1 00:10:57.396 --rc genhtml_function_coverage=1 00:10:57.396 --rc genhtml_legend=1 00:10:57.396 --rc geninfo_all_blocks=1 00:10:57.396 --rc geninfo_unexecuted_blocks=1 00:10:57.396 00:10:57.396 ' 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:57.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.396 --rc genhtml_branch_coverage=1 00:10:57.396 --rc genhtml_function_coverage=1 00:10:57.396 --rc genhtml_legend=1 00:10:57.396 --rc geninfo_all_blocks=1 00:10:57.396 --rc geninfo_unexecuted_blocks=1 00:10:57.396 00:10:57.396 ' 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:57.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.396 --rc genhtml_branch_coverage=1 00:10:57.396 --rc genhtml_function_coverage=1 00:10:57.396 --rc genhtml_legend=1 00:10:57.396 --rc geninfo_all_blocks=1 00:10:57.396 --rc geninfo_unexecuted_blocks=1 00:10:57.396 00:10:57.396 ' 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:57.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.396 --rc genhtml_branch_coverage=1 00:10:57.396 --rc genhtml_function_coverage=1 00:10:57.396 --rc genhtml_legend=1 00:10:57.396 --rc geninfo_all_blocks=1 00:10:57.396 --rc geninfo_unexecuted_blocks=1 00:10:57.396 00:10:57.396 ' 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
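The ver1/ver2 walk in the trace above is scripts/common.sh deciding whether the installed lcov is older than 2 (lt 1.15 2), which is why the legacy --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 options get exported. The helper splits both versions on '.', '-' and ':' and compares them component by component; the sketch below is a condensed rewrite of that logic under the same helper names, not the verbatim function.

cmp_versions() {                       # usage: cmp_versions 1.15 '<' 2
    local IFS=.-: op=$2
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((ver1[v] > ver2[v])) && { [[ $op == '>' || $op == '>=' ]]; return; }
        ((ver1[v] < ver2[v])) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == '==' || $op == '>=' || $op == '<=' ]]   # all components equal
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo "lcov < 2: use the lcov_branch_coverage/lcov_function_coverage flags"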
00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.396 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.397 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.397 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:57.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.397 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.397 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.397 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.397 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:57.397 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:57.397 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:57.397 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:57.397 06:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:57.397 06:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.397 06:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:57.397 ************************************ 00:10:57.397 START TEST nvmf_example 00:10:57.397 ************************************ 00:10:57.397 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:57.657 * Looking for test storage... 
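Sourcing nvmf/common.sh above also generated a host identity for this run with nvme gen-hostnqn and stashed it in NVME_HOSTNQN/NVME_HOSTID plus the NVME_HOST and NVME_CONNECT helpers. Those are meant for kernel-initiator paths; the example test that follows drives I/O from userspace instead, so purely as an illustration of what that identity is for, here is a hedged sketch of the kernel-side connect it enables. The hostid derivation and the list/disconnect steps are assumptions; the addresses and NQNs are the ones used throughout this job.

HOSTNQN=$(nvme gen-hostnqn)                       # same call as nvmf/common.sh@17 above
HOSTID=${HOSTNQN##*uuid:}                         # assumed: host ID reuses the uuid part of the NQN
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$HOSTNQN" --hostid="$HOSTID"       # what NVME_CONNECT plus NVME_HOST expand to
nvme list-subsys                                  # the new controller should be listed
nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # tear the kernel connection back down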
00:10:57.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:57.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.657 --rc genhtml_branch_coverage=1 00:10:57.657 --rc genhtml_function_coverage=1 00:10:57.657 --rc genhtml_legend=1 00:10:57.657 --rc geninfo_all_blocks=1 00:10:57.657 --rc geninfo_unexecuted_blocks=1 00:10:57.657 00:10:57.657 ' 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:57.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.657 --rc genhtml_branch_coverage=1 00:10:57.657 --rc genhtml_function_coverage=1 00:10:57.657 --rc genhtml_legend=1 00:10:57.657 --rc geninfo_all_blocks=1 00:10:57.657 --rc geninfo_unexecuted_blocks=1 00:10:57.657 00:10:57.657 ' 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:57.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.657 --rc genhtml_branch_coverage=1 00:10:57.657 --rc genhtml_function_coverage=1 00:10:57.657 --rc genhtml_legend=1 00:10:57.657 --rc geninfo_all_blocks=1 00:10:57.657 --rc geninfo_unexecuted_blocks=1 00:10:57.657 00:10:57.657 ' 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:57.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.657 --rc genhtml_branch_coverage=1 00:10:57.657 --rc genhtml_function_coverage=1 00:10:57.657 --rc genhtml_legend=1 00:10:57.657 --rc geninfo_all_blocks=1 00:10:57.657 --rc geninfo_unexecuted_blocks=1 00:10:57.657 00:10:57.657 ' 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:57.657 06:56:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.657 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:57.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:57.658 06:56:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:57.658 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:00.192 06:56:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:00.192 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:00.192 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:00.192 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:00.192 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.192 06:56:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:00.192 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:00.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
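nvmf_tcp_init's interface plumbing is scattered across the trace above; collected in one place it is the sequence below (the commands are the ones shown, with the ipts/xtrace wrappers and the iptables comment string dropped). The layout puts the target port cvl_0_0 and its 10.0.0.2 address inside the cvl_0_0_ns_spdk namespace, where the target-side application is launched further down via ip netns exec, while the initiator keeps cvl_0_1/10.0.0.1 in the default namespace; the two ping checks whose output appears here verify both directions before any NVMe/TCP traffic is attempted.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                               # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                     # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0       # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT            # admit NVMe/TCP on the initiator side
ping -c 1 10.0.0.2                                                      # host to target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                        # target namespace back to host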
00:11:00.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:11:00.193 00:11:00.193 --- 10.0.0.2 ping statistics --- 00:11:00.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.193 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:00.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:11:00.193 00:11:00.193 --- 10.0.0.1 ping statistics --- 00:11:00.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.193 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=161297 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 161297 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 161297 ']' 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.193 06:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.127 06:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.127 06:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:01.127 06:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:01.127 06:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.127 06:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.127 06:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.127 06:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.127 06:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.127 06:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.127 06:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:01.127 06:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.127 06:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.127 06:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.127 06:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:01.127 06:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:01.127 06:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.127 06:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.127 06:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.127 06:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:01.127 06:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:01.127 06:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.127 06:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.127 06:56:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.127 06:56:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.127 06:56:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.127 06:56:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:01.127 06:56:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:01.127 06:56:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:11:01.127 06:56:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:13.343 Initializing NVMe Controllers
00:11:13.343 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:13.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:13.343 Initialization complete. Launching workers.
00:11:13.343 ========================================================
00:11:13.343 Latency(us)
00:11:13.343 Device Information : IOPS MiB/s Average min max
00:11:13.343 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15062.60 58.84 4249.94 874.12 15954.50
00:11:13.343 ========================================================
00:11:13.343 Total : 15062.60 58.84 4249.94 874.12 15954.50
00:11:13.343
00:11:13.343 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:11:13.343 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:11:13.343 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:13.343 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:11:13.343 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:13.343 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:11:13.343 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:13.343 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:13.343 rmmod nvme_tcp
00:11:13.343 rmmod nvme_fabrics
00:11:13.343 rmmod nvme_keyring
00:11:13.343 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:13.343 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:11:13.343 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:11:13.343 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 161297 ']'
00:11:13.343 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 161297
00:11:13.343 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 161297 ']'
00:11:13.343 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 161297
00:11:13.343 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:11:13.343 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:13.343 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 161297
00:11:13.343 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- #
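The 10-second spdk_nvme_perf run above sustained roughly 15.1k IOPS (58.84 MiB/s) at an average latency of about 4.25 ms against the single malloc namespace. The invocation, with the flag meanings as I read them (treat the annotations as a paraphrase rather than the tool's authoritative help):

    # -q 64      queue depth                 -o 4096   I/O size in bytes
    # -w randrw  mixed random read/write     -M 30     read percentage of the mix
    # -t 10      run time in seconds         -r ...    transport ID string of the target
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'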
process_name=nvmf
00:11:13.343 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:11:13.343 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 161297'
killing process with pid 161297
00:11:13.343 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 161297
00:11:13.343 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 161297
00:11:13.344 nvmf threads initialize successfully
00:11:13.344 bdev subsystem init successfully
00:11:13.344 created a nvmf target service
00:11:13.344 create targets's poll groups done
00:11:13.344 all subsystems of target started
00:11:13.344 nvmf target is running
00:11:13.344 all subsystems of target stopped
00:11:13.344 destroy targets's poll groups done
00:11:13.344 destroyed the nvmf target service
00:11:13.344 bdev subsystem finish successfully
00:11:13.344 nvmf threads destroy successfully
00:11:13.344 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:13.344 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:13.344 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:13.344 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:11:13.344 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:11:13.344 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:13.344 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:11:13.344 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:13.344 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:13.344 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:13.344 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:13.344 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:13.913 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:13.913 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:11:13.913 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:13.913 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:13.913
00:11:13.913 real 0m16.421s
00:11:13.913 user 0m45.999s
00:11:13.913 sys 0m3.577s
00:11:13.914 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:13.914 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:13.914 ************************************
00:11:13.914 END TEST nvmf_example
00:11:13.914 ************************************
00:11:13.914 06:56:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:11:13.914 06:56:34
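The teardown traced above mirrors the setup: reap the target process, unload the NVMe-oF modules, strip the test's iptables rules, and dismantle the namespace. As a sketch (interpreting _remove_spdk_ns as deleting the test namespace, which is an assumption rather than something the trace states):

    kill "$nvmfpid" && wait "$nvmfpid"
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except SPDK_NVMF-tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1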
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:13.914 06:56:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.914 06:56:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:13.914 ************************************ 00:11:13.914 START TEST nvmf_filesystem 00:11:13.914 ************************************ 00:11:13.914 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:13.914 * Looking for test storage... 00:11:13.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:13.914 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:13.914 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:13.914 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:14.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.178 --rc genhtml_branch_coverage=1 00:11:14.178 --rc genhtml_function_coverage=1 00:11:14.178 --rc genhtml_legend=1 00:11:14.178 --rc geninfo_all_blocks=1 00:11:14.178 --rc geninfo_unexecuted_blocks=1 00:11:14.178 00:11:14.178 ' 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:14.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.178 --rc genhtml_branch_coverage=1 00:11:14.178 --rc genhtml_function_coverage=1 00:11:14.178 --rc genhtml_legend=1 00:11:14.178 --rc geninfo_all_blocks=1 00:11:14.178 --rc geninfo_unexecuted_blocks=1 00:11:14.178 00:11:14.178 ' 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:14.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.178 --rc genhtml_branch_coverage=1 00:11:14.178 --rc genhtml_function_coverage=1 00:11:14.178 --rc genhtml_legend=1 00:11:14.178 --rc geninfo_all_blocks=1 00:11:14.178 --rc geninfo_unexecuted_blocks=1 00:11:14.178 00:11:14.178 ' 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:14.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.178 --rc genhtml_branch_coverage=1 00:11:14.178 --rc genhtml_function_coverage=1 00:11:14.178 --rc genhtml_legend=1 00:11:14.178 --rc geninfo_all_blocks=1 00:11:14.178 --rc geninfo_unexecuted_blocks=1 00:11:14.178 00:11:14.178 ' 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:14.178 06:56:34 
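The cmp_versions dance traced above is a component-wise numeric comparison of the installed lcov version against 2, used to pick matching coverage options. Condensed into one function with the same split on '.' and '-' (a simplified sketch, not the scripts/common.sh original):

    lt() {  # true if version $1 sorts before version $2
        local IFS=.-
        local -a ver1=($1) ver2=($2)
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is a 1.x release"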
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:14.178 
06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:14.178 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:14.179 06:56:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 
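The CONFIG_* values sourced from build_config.sh above (and mirrored as SPDK_CONFIG_* defines in the generated header just below) are the frozen output of SPDK's ./configure step: this build links against the external DPDK tree, with shared libraries, vfio-user, UBSan and -Werror enabled. Something along these lines would reproduce it, with the caveat that the exact flag names are reconstructed from the values rather than taken from the log:

    ./configure \
        --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
        --with-vfio-user \
        --with-shared \
        --enable-ubsan \
        --enable-werror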
00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:14.179 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:14.179 #define SPDK_CONFIG_H 00:11:14.179 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:14.179 #define SPDK_CONFIG_APPS 1 00:11:14.179 #define SPDK_CONFIG_ARCH native 00:11:14.179 #undef SPDK_CONFIG_ASAN 00:11:14.179 #undef SPDK_CONFIG_AVAHI 00:11:14.179 #undef SPDK_CONFIG_CET 00:11:14.179 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:14.179 #define SPDK_CONFIG_COVERAGE 1 00:11:14.179 #define SPDK_CONFIG_CROSS_PREFIX 00:11:14.179 #undef SPDK_CONFIG_CRYPTO 00:11:14.179 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:14.179 #undef SPDK_CONFIG_CUSTOMOCF 00:11:14.179 #undef SPDK_CONFIG_DAOS 00:11:14.179 #define SPDK_CONFIG_DAOS_DIR 00:11:14.179 #define SPDK_CONFIG_DEBUG 1 00:11:14.179 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:14.179 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:14.179 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.179 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:14.179 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:14.179 #undef SPDK_CONFIG_DPDK_UADK 00:11:14.179 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:14.179 #define SPDK_CONFIG_EXAMPLES 1 00:11:14.179 #undef SPDK_CONFIG_FC 00:11:14.179 #define SPDK_CONFIG_FC_PATH 00:11:14.179 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:14.179 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:14.179 #define SPDK_CONFIG_FSDEV 1 00:11:14.179 #undef SPDK_CONFIG_FUSE 00:11:14.179 #undef SPDK_CONFIG_FUZZER 00:11:14.179 #define SPDK_CONFIG_FUZZER_LIB 00:11:14.179 #undef SPDK_CONFIG_GOLANG 00:11:14.179 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:14.179 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:14.179 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:14.179 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:14.179 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:14.179 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:14.179 #undef SPDK_CONFIG_HAVE_LZ4 00:11:14.179 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:14.179 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:14.179 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:14.179 #define SPDK_CONFIG_IDXD 1 00:11:14.179 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:14.179 #undef SPDK_CONFIG_IPSEC_MB 00:11:14.179 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:14.179 #define SPDK_CONFIG_ISAL 1 00:11:14.179 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:14.179 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:14.179 #define SPDK_CONFIG_LIBDIR 00:11:14.179 #undef SPDK_CONFIG_LTO 00:11:14.179 #define SPDK_CONFIG_MAX_LCORES 128 00:11:14.179 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:14.180 #define SPDK_CONFIG_NVME_CUSE 1 00:11:14.180 #undef SPDK_CONFIG_OCF 00:11:14.180 #define SPDK_CONFIG_OCF_PATH 00:11:14.180 #define SPDK_CONFIG_OPENSSL_PATH 00:11:14.180 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:14.180 #define SPDK_CONFIG_PGO_DIR 00:11:14.180 #undef SPDK_CONFIG_PGO_USE 00:11:14.180 #define SPDK_CONFIG_PREFIX /usr/local 00:11:14.180 #undef SPDK_CONFIG_RAID5F 00:11:14.180 #undef SPDK_CONFIG_RBD 00:11:14.180 #define SPDK_CONFIG_RDMA 1 00:11:14.180 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:14.180 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:14.180 #define 
SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:14.180 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:14.180 #define SPDK_CONFIG_SHARED 1 00:11:14.180 #undef SPDK_CONFIG_SMA 00:11:14.180 #define SPDK_CONFIG_TESTS 1 00:11:14.180 #undef SPDK_CONFIG_TSAN 00:11:14.180 #define SPDK_CONFIG_UBLK 1 00:11:14.180 #define SPDK_CONFIG_UBSAN 1 00:11:14.180 #undef SPDK_CONFIG_UNIT_TESTS 00:11:14.180 #undef SPDK_CONFIG_URING 00:11:14.180 #define SPDK_CONFIG_URING_PATH 00:11:14.180 #undef SPDK_CONFIG_URING_ZNS 00:11:14.180 #undef SPDK_CONFIG_USDT 00:11:14.180 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:14.180 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:14.180 #define SPDK_CONFIG_VFIO_USER 1 00:11:14.180 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:14.180 #define SPDK_CONFIG_VHOST 1 00:11:14.180 #define SPDK_CONFIG_VIRTIO 1 00:11:14.180 #undef SPDK_CONFIG_VTUNE 00:11:14.180 #define SPDK_CONFIG_VTUNE_DIR 00:11:14.180 #define SPDK_CONFIG_WERROR 1 00:11:14.180 #define SPDK_CONFIG_WPDK_DIR 00:11:14.180 #undef SPDK_CONFIG_XNVME 00:11:14.180 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
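The applications.sh check above boils down to asking whether SPDK_CONFIG_DEBUG is defined in the generated header. A stand-alone equivalent, using the same header path as in the trace:

    if grep -q '#define SPDK_CONFIG_DEBUG' \
           /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h; then
        echo "debug build"
    fi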
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:14.180 06:56:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:14.180 06:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:14.180 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:14.181 06:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v22.11.4 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:14.181 
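The long run of ': <value>' / 'export SPDK_TEST_*' pairs above and below is autotest_common.sh applying its default-then-export idiom to every test switch; the traced values are what this particular job resolved them to (whether each came from the script default or the job environment is not visible in the trace). In sketch form, for the switches that matter to this run:

    : "${SPDK_TEST_NVMF:=1}";              export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}";  export SPDK_TEST_NVMF_TRANSPORT
    : "${SPDK_TEST_NVMF_NICS:=e810}";      export SPDK_TEST_NVMF_NICS
    : "${SPDK_TEST_VFIOUSER:=1}";          export SPDK_TEST_VFIOUSER
    : "${SPDK_RUN_UBSAN:=1}";              export SPDK_RUN_UBSAN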
06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:14.181 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # 
'[' -z /var/spdk/dependencies ']' 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:14.182 06:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:14.182 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 163003 ]] 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 163003 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.0EyuO6 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:14.183 06:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.0EyuO6/tests/target /tmp/spdk.0EyuO6 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=54532104192 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988532224 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7456428032 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
avails["$mount"]=30984232960 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375277568 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22429696 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993956864 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994268160 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=311296 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:14.183 * Looking for test storage... 
00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=54532104192 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9671020544 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:14.183 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:14.184 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:14.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.445 --rc genhtml_branch_coverage=1 00:11:14.445 --rc genhtml_function_coverage=1 00:11:14.445 --rc genhtml_legend=1 00:11:14.445 --rc geninfo_all_blocks=1 00:11:14.445 --rc geninfo_unexecuted_blocks=1 00:11:14.445 00:11:14.445 ' 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:14.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.445 --rc genhtml_branch_coverage=1 00:11:14.445 --rc genhtml_function_coverage=1 00:11:14.445 --rc genhtml_legend=1 00:11:14.445 --rc geninfo_all_blocks=1 00:11:14.445 --rc geninfo_unexecuted_blocks=1 00:11:14.445 00:11:14.445 ' 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:14.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.445 --rc genhtml_branch_coverage=1 00:11:14.445 --rc genhtml_function_coverage=1 00:11:14.445 --rc genhtml_legend=1 00:11:14.445 --rc geninfo_all_blocks=1 00:11:14.445 --rc geninfo_unexecuted_blocks=1 00:11:14.445 00:11:14.445 ' 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:14.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.445 --rc genhtml_branch_coverage=1 00:11:14.445 --rc genhtml_function_coverage=1 00:11:14.445 --rc genhtml_legend=1 00:11:14.445 --rc geninfo_all_blocks=1 00:11:14.445 --rc geninfo_unexecuted_blocks=1 00:11:14.445 00:11:14.445 ' 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.445 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:14.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:14.446 06:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:14.446 06:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:16.986 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:16.987 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:16.987 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:16.987 06:56:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:16.987 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:16.987 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:16.987 06:56:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:16.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:16.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:11:16.987 00:11:16.987 --- 10.0.0.2 ping statistics --- 00:11:16.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.987 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:16.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:16.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:11:16.987 00:11:16.987 --- 10.0.0.1 ping statistics --- 00:11:16.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.987 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:16.987 ************************************ 00:11:16.987 START TEST nvmf_filesystem_no_in_capsule 00:11:16.987 ************************************ 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:16.987 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.988 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=164673 00:11:16.988 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:16.988 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 164673 00:11:16.988 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 164673 ']' 00:11:16.988 06:56:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.988 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.988 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.988 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.988 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.988 [2024-11-18 06:56:37.646914] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:11:16.988 [2024-11-18 06:56:37.647009] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.988 [2024-11-18 06:56:37.721088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.988 [2024-11-18 06:56:37.770723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.988 [2024-11-18 06:56:37.770795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.988 [2024-11-18 06:56:37.770808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:16.988 [2024-11-18 06:56:37.770819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:16.988 [2024-11-18 06:56:37.770844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:16.988 [2024-11-18 06:56:37.772316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.988 [2024-11-18 06:56:37.772382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.988 [2024-11-18 06:56:37.772451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.988 [2024-11-18 06:56:37.772454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.988 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.988 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:16.988 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:16.988 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:16.988 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.988 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.988 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:16.988 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:16.988 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.988 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.988 [2024-11-18 06:56:37.920903] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:16.988 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.988 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:16.988 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.988 06:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.247 Malloc1 00:11:17.247 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.247 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:17.247 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.247 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.247 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.247 06:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:17.247 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.247 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.247 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.247 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.247 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.247 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.247 [2024-11-18 06:56:38.114901] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.248 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.248 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:17.248 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:17.248 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:17.248 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:17.248 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:17.248 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:17.248 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.248 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.248 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.248 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:17.248 { 00:11:17.248 "name": "Malloc1", 00:11:17.248 "aliases": [ 00:11:17.248 "bf11e5be-0c1b-4109-898c-fcf4c93ecbd0" 00:11:17.248 ], 00:11:17.248 "product_name": "Malloc disk", 00:11:17.248 "block_size": 512, 00:11:17.248 "num_blocks": 1048576, 00:11:17.248 "uuid": "bf11e5be-0c1b-4109-898c-fcf4c93ecbd0", 00:11:17.248 "assigned_rate_limits": { 00:11:17.248 "rw_ios_per_sec": 0, 00:11:17.248 "rw_mbytes_per_sec": 0, 00:11:17.248 "r_mbytes_per_sec": 0, 00:11:17.248 "w_mbytes_per_sec": 0 00:11:17.248 }, 00:11:17.248 "claimed": true, 00:11:17.248 "claim_type": "exclusive_write", 00:11:17.248 "zoned": false, 00:11:17.248 "supported_io_types": { 00:11:17.248 "read": 
true, 00:11:17.248 "write": true, 00:11:17.248 "unmap": true, 00:11:17.248 "flush": true, 00:11:17.248 "reset": true, 00:11:17.248 "nvme_admin": false, 00:11:17.248 "nvme_io": false, 00:11:17.248 "nvme_io_md": false, 00:11:17.248 "write_zeroes": true, 00:11:17.248 "zcopy": true, 00:11:17.248 "get_zone_info": false, 00:11:17.248 "zone_management": false, 00:11:17.248 "zone_append": false, 00:11:17.248 "compare": false, 00:11:17.248 "compare_and_write": false, 00:11:17.248 "abort": true, 00:11:17.248 "seek_hole": false, 00:11:17.248 "seek_data": false, 00:11:17.248 "copy": true, 00:11:17.248 "nvme_iov_md": false 00:11:17.248 }, 00:11:17.248 "memory_domains": [ 00:11:17.248 { 00:11:17.248 "dma_device_id": "system", 00:11:17.248 "dma_device_type": 1 00:11:17.248 }, 00:11:17.248 { 00:11:17.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.248 "dma_device_type": 2 00:11:17.248 } 00:11:17.248 ], 00:11:17.248 "driver_specific": {} 00:11:17.248 } 00:11:17.248 ]' 00:11:17.248 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:17.248 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:17.248 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:17.248 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:17.248 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:17.248 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:17.248 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:17.248 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:18.183 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:18.183 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:18.183 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.183 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:18.183 06:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:20.085 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:20.085 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:20.085 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:20.085 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:20.085 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:20.085 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:20.085 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:20.085 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:20.085 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:20.085 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:20.085 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:20.085 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:20.085 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:20.085 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:20.085 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:20.085 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:20.085 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:20.343 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:21.278 06:56:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:22.211 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:22.211 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:22.211 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:22.211 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.211 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.469 ************************************ 00:11:22.469 START TEST filesystem_ext4 00:11:22.469 ************************************ 00:11:22.469 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
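Everything from the transport creation above through the partprobe boils down to a short RPC sequence on the target plus a connect-and-partition step on the host. A condensed sketch; the commands and arguments are the ones in the trace, only the plain rpc.py spelling of the rpc_cmd wrapper is assumed, and nvme connect in the log additionally passes --hostnqn/--hostid for the host identity:

    # Target side: TCP transport, 512 MiB malloc bdev (512-byte blocks), one subsystem.
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0   # -u io-unit-size, -c in-capsule-data-size
    rpc.py bdev_malloc_create 512 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side: connect over TCP, wait for the serial to show up, carve one partition.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # waitforserial's readiness check
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe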
00:11:22.469 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:22.469 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:22.469 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:22.469 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:22.469 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:22.469 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:22.469 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:22.469 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:22.469 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:22.469 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:22.469 mke2fs 1.47.0 (5-Feb-2023) 00:11:22.469 Discarding device blocks: 0/522240 done 00:11:22.469 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:22.469 Filesystem UUID: c60ea090-700f-4b05-8fe9-c57748f1a907 00:11:22.469 Superblock backups stored on blocks: 00:11:22.469 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:22.469 00:11:22.469 Allocating group tables: 0/64 done 00:11:22.469 Writing inode tables: 0/64 done 00:11:23.844 Creating journal (8192 blocks): done 00:11:25.602 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:11:25.602 00:11:25.602 06:56:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:25.602 06:56:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:30.867 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:31.127 
06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 164673 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:31.127 00:11:31.127 real 0m8.735s 00:11:31.127 user 0m0.012s 00:11:31.127 sys 0m0.111s 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:31.127 ************************************ 00:11:31.127 END TEST filesystem_ext4 00:11:31.127 ************************************ 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.127 ************************************ 00:11:31.127 START TEST filesystem_btrfs 00:11:31.127 ************************************ 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:31.127 06:56:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:31.127 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:31.386 btrfs-progs v6.8.1 00:11:31.386 See https://btrfs.readthedocs.io for more information. 00:11:31.386 00:11:31.386 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:31.386 NOTE: several default settings have changed in version 5.15, please make sure 00:11:31.386 this does not affect your deployments: 00:11:31.386 - DUP for metadata (-m dup) 00:11:31.386 - enabled no-holes (-O no-holes) 00:11:31.386 - enabled free-space-tree (-R free-space-tree) 00:11:31.386 00:11:31.386 Label: (null) 00:11:31.386 UUID: 1db40a9b-540b-4ac8-9a67-52a66398cdf1 00:11:31.386 Node size: 16384 00:11:31.386 Sector size: 4096 (CPU page size: 4096) 00:11:31.386 Filesystem size: 510.00MiB 00:11:31.386 Block group profiles: 00:11:31.386 Data: single 8.00MiB 00:11:31.386 Metadata: DUP 32.00MiB 00:11:31.386 System: DUP 8.00MiB 00:11:31.386 SSD detected: yes 00:11:31.386 Zoned device: no 00:11:31.386 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:31.386 Checksum: crc32c 00:11:31.386 Number of devices: 1 00:11:31.386 Devices: 00:11:31.386 ID SIZE PATH 00:11:31.386 1 510.00MiB /dev/nvme0n1p1 00:11:31.386 00:11:31.386 06:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:31.386 06:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 164673 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:32.321 
06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:32.321 00:11:32.321 real 0m1.181s 00:11:32.321 user 0m0.020s 00:11:32.321 sys 0m0.136s 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:32.321 ************************************ 00:11:32.321 END TEST filesystem_btrfs 00:11:32.321 ************************************ 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.321 ************************************ 00:11:32.321 START TEST filesystem_xfs 00:11:32.321 ************************************ 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:32.321 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:32.580 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:32.580 = sectsz=512 attr=2, projid32bit=1 00:11:32.580 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:32.580 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:32.580 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:32.580 = sunit=0 swidth=0 blks 00:11:32.580 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:32.580 log =internal log bsize=4096 blocks=16384, version=2 00:11:32.580 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:32.580 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:33.146 Discarding blocks...Done. 00:11:33.146 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:33.146 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:36.430 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:36.430 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:36.430 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:36.430 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:36.430 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:36.430 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:36.430 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 164673 00:11:36.430 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:36.430 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:36.430 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:36.430 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:36.430 00:11:36.430 real 0m3.893s 00:11:36.430 user 0m0.015s 00:11:36.430 sys 0m0.102s 00:11:36.430 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:36.430 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:36.430 ************************************ 00:11:36.430 END TEST filesystem_xfs 00:11:36.430 ************************************ 00:11:36.430 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:36.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.689 06:56:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 164673 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 164673 ']' 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 164673 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 164673 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 164673' 00:11:36.689 killing process with pid 164673 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 164673 00:11:36.689 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 164673 00:11:37.256 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:37.256 00:11:37.256 real 0m20.383s 00:11:37.256 user 1m19.130s 00:11:37.256 sys 0m2.501s 00:11:37.256 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.256 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.256 ************************************ 00:11:37.256 END TEST nvmf_filesystem_no_in_capsule 00:11:37.256 ************************************ 00:11:37.256 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:37.256 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:37.256 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.256 06:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:37.256 ************************************ 00:11:37.256 START TEST nvmf_filesystem_in_capsule 00:11:37.256 ************************************ 00:11:37.256 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:37.256 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:37.256 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:37.256 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:37.256 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:37.256 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.256 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=167278 00:11:37.256 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:37.256 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 167278 00:11:37.256 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 167278 ']' 00:11:37.256 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.256 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.256 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
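The three START/END TEST blocks above (filesystem_ext4, filesystem_btrfs, filesystem_xfs) run one and the same check, parameterized only by the filesystem type, and the in-capsule pass that starts here repeats it. Per filesystem, the traced steps amount to the following sketch (pid 164673 is the first pass's target process):

    # filesystem.sh@18-43, one iteration of the check:
    mkfs.ext4 -F /dev/nvme0n1p1               # btrfs and xfs are forced with -f instead of -F
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa                     # small write workload over NVMe/TCP
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 164673                            # the target must still be alive after the I/O
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible on the host
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible on the host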
00:11:37.256 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.256 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.256 [2024-11-18 06:56:58.082335] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:11:37.256 [2024-11-18 06:56:58.082419] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.256 [2024-11-18 06:56:58.154357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.257 [2024-11-18 06:56:58.201597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.257 [2024-11-18 06:56:58.201655] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.257 [2024-11-18 06:56:58.201671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.257 [2024-11-18 06:56:58.201683] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.257 [2024-11-18 06:56:58.201694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:37.257 [2024-11-18 06:56:58.203276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.257 [2024-11-18 06:56:58.203311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.257 [2024-11-18 06:56:58.203396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.257 [2024-11-18 06:56:58.203399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.516 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.516 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:37.516 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:37.516 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:37.516 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.516 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.516 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:37.516 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:37.516 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.516 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.516 [2024-11-18 06:56:58.349437] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.516 06:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.516 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:37.516 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.516 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.775 Malloc1 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.775 [2024-11-18 06:56:58.534393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:37.775 06:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:37.775 { 00:11:37.775 "name": "Malloc1", 00:11:37.775 "aliases": [ 00:11:37.775 "988c7a7c-7781-45fd-ba6e-22fcd286d7b0" 00:11:37.775 ], 00:11:37.775 "product_name": "Malloc disk", 00:11:37.775 "block_size": 512, 00:11:37.775 "num_blocks": 1048576, 00:11:37.775 "uuid": "988c7a7c-7781-45fd-ba6e-22fcd286d7b0", 00:11:37.775 "assigned_rate_limits": { 00:11:37.775 "rw_ios_per_sec": 0, 00:11:37.775 "rw_mbytes_per_sec": 0, 00:11:37.775 "r_mbytes_per_sec": 0, 00:11:37.775 "w_mbytes_per_sec": 0 00:11:37.775 }, 00:11:37.775 "claimed": true, 00:11:37.775 "claim_type": "exclusive_write", 00:11:37.775 "zoned": false, 00:11:37.775 "supported_io_types": { 00:11:37.775 "read": true, 00:11:37.775 "write": true, 00:11:37.775 "unmap": true, 00:11:37.775 "flush": true, 00:11:37.775 "reset": true, 00:11:37.775 "nvme_admin": false, 00:11:37.775 "nvme_io": false, 00:11:37.775 "nvme_io_md": false, 00:11:37.775 "write_zeroes": true, 00:11:37.775 "zcopy": true, 00:11:37.775 "get_zone_info": false, 00:11:37.775 "zone_management": false, 00:11:37.775 "zone_append": false, 00:11:37.775 "compare": false, 00:11:37.775 "compare_and_write": false, 00:11:37.775 "abort": true, 00:11:37.775 "seek_hole": false, 00:11:37.775 "seek_data": false, 00:11:37.775 "copy": true, 00:11:37.775 "nvme_iov_md": false 00:11:37.775 }, 00:11:37.775 "memory_domains": [ 00:11:37.775 { 00:11:37.775 "dma_device_id": "system", 00:11:37.775 "dma_device_type": 1 00:11:37.775 }, 00:11:37.775 { 00:11:37.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.775 "dma_device_type": 2 00:11:37.775 } 00:11:37.775 ], 00:11:37.775 "driver_specific": {} 00:11:37.775 } 00:11:37.775 ]' 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:37.775 06:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:38.345 06:56:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:38.345 06:56:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:38.345 06:56:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:38.345 06:56:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:38.345 06:56:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:40.880 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:40.880 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:40.880 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:40.880 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:40.880 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:40.880 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:40.880 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:40.880 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:40.880 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:40.880 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:40.880 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:40.880 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:40.880 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:40.880 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:40.880 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:40.880 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:40.880 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:40.880 06:57:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:41.446 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:42.381 06:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:42.381 06:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:42.381 06:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:42.381 06:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.381 06:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.381 ************************************ 00:11:42.381 START TEST filesystem_in_capsule_ext4 00:11:42.381 ************************************ 00:11:42.381 06:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:42.381 06:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:42.381 06:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:42.381 06:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:42.381 06:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:42.381 06:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:42.381 06:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:42.381 06:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:42.381 06:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:42.381 06:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:42.381 06:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:42.381 mke2fs 1.47.0 (5-Feb-2023) 00:11:42.640 Discarding device blocks: 0/522240 done 00:11:42.640 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:42.640 Filesystem UUID: 6c75aaae-9366-447d-92bf-1a2df521236a 00:11:42.640 Superblock backups stored on blocks: 00:11:42.640 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:42.640 00:11:42.640 Allocating group tables: 0/64 done 00:11:42.640 Writing inode tables: 
0/64 done 00:11:44.540 Creating journal (8192 blocks): done 00:11:44.540 Writing superblocks and filesystem accounting information: 0/64 done 00:11:44.540 00:11:44.540 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:44.540 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 167278 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:51.101 00:11:51.101 real 0m7.935s 00:11:51.101 user 0m0.021s 00:11:51.101 sys 0m0.064s 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:51.101 ************************************ 00:11:51.101 END TEST filesystem_in_capsule_ext4 00:11:51.101 ************************************ 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.101 
************************************ 00:11:51.101 START TEST filesystem_in_capsule_btrfs 00:11:51.101 ************************************ 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:51.101 btrfs-progs v6.8.1 00:11:51.101 See https://btrfs.readthedocs.io for more information. 00:11:51.101 00:11:51.101 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
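The only functional difference between this second pass and the first is the transport's in-capsule data size: the no_in_capsule pass created the TCP transport with -c 0, while this pass uses -c 4096, so host writes up to 4 KiB can be carried inside the command capsule rather than in a separate data transfer, which is what the nvmf_filesystem_in_capsule name refers to. Both invocations appear verbatim in the trace:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # first pass: no in-capsule data
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # this pass: up to 4 KiB in-capsule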
00:11:51.101 NOTE: several default settings have changed in version 5.15, please make sure 00:11:51.101 this does not affect your deployments: 00:11:51.101 - DUP for metadata (-m dup) 00:11:51.101 - enabled no-holes (-O no-holes) 00:11:51.101 - enabled free-space-tree (-R free-space-tree) 00:11:51.101 00:11:51.101 Label: (null) 00:11:51.101 UUID: faf043c5-cb81-4809-a24f-e3c338118a5d 00:11:51.101 Node size: 16384 00:11:51.101 Sector size: 4096 (CPU page size: 4096) 00:11:51.101 Filesystem size: 510.00MiB 00:11:51.101 Block group profiles: 00:11:51.101 Data: single 8.00MiB 00:11:51.101 Metadata: DUP 32.00MiB 00:11:51.101 System: DUP 8.00MiB 00:11:51.101 SSD detected: yes 00:11:51.101 Zoned device: no 00:11:51.101 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:51.101 Checksum: crc32c 00:11:51.101 Number of devices: 1 00:11:51.101 Devices: 00:11:51.101 ID SIZE PATH 00:11:51.101 1 510.00MiB /dev/nvme0n1p1 00:11:51.101 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 167278 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:51.101 00:11:51.101 real 0m0.365s 00:11:51.101 user 0m0.025s 00:11:51.101 sys 0m0.088s 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:51.101 ************************************ 00:11:51.101 END TEST filesystem_in_capsule_btrfs 00:11:51.101 ************************************ 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.101 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.101 ************************************ 00:11:51.101 START TEST filesystem_in_capsule_xfs 00:11:51.101 ************************************ 00:11:51.102 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:51.102 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:51.102 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:51.102 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:51.102 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:51.102 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:51.102 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:51.102 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:51.102 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:51.102 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:51.102 06:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:51.102 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:51.102 = sectsz=512 attr=2, projid32bit=1 00:11:51.102 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:51.102 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:51.102 data = bsize=4096 blocks=130560, imaxpct=25 00:11:51.102 = sunit=0 swidth=0 blks 00:11:51.102 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:51.102 log =internal log bsize=4096 blocks=16384, version=2 00:11:51.102 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:51.102 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:52.040 Discarding blocks...Done. 
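All three runs funnel through the make_filesystem helper in common/autotest_common.sh, and the trace shows it doing little more than choosing a force flag per filesystem type before calling the matching mkfs. A rough reconstruction under that reading (the ext4 branch and the retry counter implied by "local i=0" are assumptions; only the non-ext4 path taking -f is visible in this output):

    # Hypothetical sketch of autotest_common.sh@930-949 as suggested by the trace.
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0                 # traced at @932; likely a retry counter, unused in this sketch
        local force
        if [[ $fstype == ext4 ]]; then
            force=-F              # assumption: mke2fs spells "force" as -F
        else
            force=-f              # traced for btrfs and xfs at @938
        fi
        mkfs."$fstype" $force "$dev_name"
    }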
00:11:52.040 06:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:52.040 06:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:54.580 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 167278 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:54.580 00:11:54.580 real 0m3.387s 00:11:54.580 user 0m0.011s 00:11:54.580 sys 0m0.065s 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:54.580 ************************************ 00:11:54.580 END TEST filesystem_in_capsule_xfs 00:11:54.580 ************************************ 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 167278 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 167278 ']' 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 167278 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 167278 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 167278' 00:11:54.580 killing process with pid 167278 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 167278 00:11:54.580 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 167278 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:55.149 00:11:55.149 real 0m17.833s 00:11:55.149 user 1m9.131s 00:11:55.149 sys 0m2.226s 00:11:55.149 06:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.149 ************************************ 00:11:55.149 END TEST nvmf_filesystem_in_capsule 00:11:55.149 ************************************ 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:55.149 rmmod nvme_tcp 00:11:55.149 rmmod nvme_fabrics 00:11:55.149 rmmod nvme_keyring 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.149 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.061 06:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:57.061 00:11:57.061 real 0m43.198s 00:11:57.061 user 2m29.433s 00:11:57.061 sys 0m6.550s 00:11:57.061 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.061 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.061 
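With the in-capsule tests done, nvmftestfini tears the transport back down: the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded (the three rmmod lines above), the SPDK-tagged iptables rules are filtered back out, the target network namespace is removed and the stale address on cvl_0_1 is flushed. In condensed shell form, reconstructed from the traced commands (the body of _remove_spdk_ns is not shown here and is assumed):

    modprobe -v -r nvme-tcp                                # also drops nvme_fabrics/nvme_keyring deps
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except SPDK's tagged rules
    ip netns del cvl_0_0_ns_spdk 2>/dev/null || true       # assumption: what _remove_spdk_ns amounts to
    ip -4 addr flush cvl_0_1                               # traced at nvmf/common.sh@303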
************************************ 00:11:57.061 END TEST nvmf_filesystem 00:11:57.061 ************************************ 00:11:57.061 06:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:57.061 06:57:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:57.061 06:57:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.061 06:57:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:57.320 ************************************ 00:11:57.320 START TEST nvmf_target_discovery 00:11:57.320 ************************************ 00:11:57.320 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:57.320 * Looking for test storage... 00:11:57.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.320 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:57.320 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:57.320 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:57.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.321 --rc genhtml_branch_coverage=1 00:11:57.321 --rc genhtml_function_coverage=1 00:11:57.321 --rc genhtml_legend=1 00:11:57.321 --rc geninfo_all_blocks=1 00:11:57.321 --rc geninfo_unexecuted_blocks=1 00:11:57.321 00:11:57.321 ' 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:57.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.321 --rc genhtml_branch_coverage=1 00:11:57.321 --rc genhtml_function_coverage=1 00:11:57.321 --rc genhtml_legend=1 00:11:57.321 --rc geninfo_all_blocks=1 00:11:57.321 --rc geninfo_unexecuted_blocks=1 00:11:57.321 00:11:57.321 ' 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:57.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.321 --rc genhtml_branch_coverage=1 00:11:57.321 --rc genhtml_function_coverage=1 00:11:57.321 --rc genhtml_legend=1 00:11:57.321 --rc geninfo_all_blocks=1 00:11:57.321 --rc geninfo_unexecuted_blocks=1 00:11:57.321 00:11:57.321 ' 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:57.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.321 --rc genhtml_branch_coverage=1 00:11:57.321 --rc genhtml_function_coverage=1 00:11:57.321 --rc genhtml_legend=1 00:11:57.321 --rc geninfo_all_blocks=1 00:11:57.321 --rc geninfo_unexecuted_blocks=1 00:11:57.321 00:11:57.321 ' 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:57.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:57.321 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:57.322 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:57.322 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:57.322 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:57.322 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:57.322 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:57.322 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:57.322 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:57.322 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:57.322 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:57.322 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:57.322 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:57.322 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.322 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.322 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.322 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:57.322 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:57.322 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:57.322 06:57:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:59.858 06:57:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:59.858 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:59.858 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:59.858 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:59.858 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:59.858 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:59.859 06:57:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:59.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:11:59.859 00:11:59.859 --- 10.0.0.2 ping statistics --- 00:11:59.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.859 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:11:59.859 00:11:59.859 --- 10.0.0.1 ping statistics --- 00:11:59.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.859 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=172204 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 172204 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 172204 ']' 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.859 [2024-11-18 06:57:20.568421] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:11:59.859 [2024-11-18 06:57:20.568523] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.859 [2024-11-18 06:57:20.648184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.859 [2024-11-18 06:57:20.697642] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.859 [2024-11-18 06:57:20.697705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.859 [2024-11-18 06:57:20.697735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.859 [2024-11-18 06:57:20.697747] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.859 [2024-11-18 06:57:20.697757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
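Before nvmf_tgt prints the startup notices above, the discovery test has already split the E810 pair across namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, cvl_0_1 stays on the host as 10.0.0.1, TCP port 4420 is opened on the initiator side and both directions are pinged before the target is launched inside the namespace. A condensed sketch of that setup, taken from the traced commands (the nvmf_tgt path is shortened; the flags are as logged):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF                    # tagged so nvmftestfini can strip it later
    ping -c 1 10.0.0.2                                    # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # namespace -> host
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # path shortened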
00:11:59.859 [2024-11-18 06:57:20.699550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.859 [2024-11-18 06:57:20.699575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.859 [2024-11-18 06:57:20.699644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.859 [2024-11-18 06:57:20.699647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:59.859 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.119 [2024-11-18 06:57:20.839929] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.119 Null1 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.119 06:57:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.119 [2024-11-18 06:57:20.884219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.119 Null2 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:00.119 Null3 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.119 Null4 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.119 06:57:20 
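The records above come from a loop in discovery.sh that repeats the same four RPCs for each of cnode1 through cnode4: create a null bdev, create the subsystem, attach the bdev as a namespace, and expose a TCP listener on 10.0.0.2:4420. A minimal standalone sketch of that sequence, assuming a running nvmf_tgt and that the rpc_cmd wrapper in the log maps to the SPDK repository's scripts/rpc.py:

  # One null-bdev-backed subsystem per iteration (sizes, serials and NQNs as in the log).
  for i in 1 2 3 4; do
    scripts/rpc.py bdev_null_create "Null$i" 102400 512
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done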
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.119 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:00.379 00:12:00.379 Discovery Log Number of Records 6, Generation counter 6 00:12:00.379 =====Discovery Log Entry 0====== 00:12:00.379 trtype: tcp 00:12:00.379 adrfam: ipv4 00:12:00.379 subtype: current discovery subsystem 00:12:00.379 treq: not required 00:12:00.379 portid: 0 00:12:00.379 trsvcid: 4420 00:12:00.379 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:00.379 traddr: 10.0.0.2 00:12:00.379 eflags: explicit discovery connections, duplicate discovery information 00:12:00.379 sectype: none 00:12:00.379 =====Discovery Log Entry 1====== 00:12:00.379 trtype: tcp 00:12:00.379 adrfam: ipv4 00:12:00.379 subtype: nvme subsystem 00:12:00.379 treq: not required 00:12:00.379 portid: 0 00:12:00.379 trsvcid: 4420 00:12:00.379 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:00.379 traddr: 10.0.0.2 00:12:00.379 eflags: none 00:12:00.379 sectype: none 00:12:00.379 =====Discovery Log Entry 2====== 00:12:00.379 trtype: tcp 00:12:00.379 adrfam: ipv4 00:12:00.379 subtype: nvme subsystem 00:12:00.379 treq: not required 00:12:00.379 portid: 0 00:12:00.379 trsvcid: 4420 00:12:00.379 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:00.379 traddr: 10.0.0.2 00:12:00.379 eflags: none 00:12:00.379 sectype: none 00:12:00.379 =====Discovery Log Entry 3====== 00:12:00.379 trtype: tcp 00:12:00.379 adrfam: ipv4 00:12:00.379 subtype: nvme subsystem 00:12:00.379 treq: not required 00:12:00.379 portid: 0 00:12:00.379 trsvcid: 4420 00:12:00.379 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:00.379 traddr: 10.0.0.2 00:12:00.379 eflags: none 00:12:00.379 sectype: none 00:12:00.379 =====Discovery Log Entry 4====== 00:12:00.379 trtype: tcp 00:12:00.379 adrfam: ipv4 00:12:00.379 subtype: nvme subsystem 
00:12:00.379 treq: not required 00:12:00.379 portid: 0 00:12:00.379 trsvcid: 4420 00:12:00.379 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:00.379 traddr: 10.0.0.2 00:12:00.379 eflags: none 00:12:00.379 sectype: none 00:12:00.379 =====Discovery Log Entry 5====== 00:12:00.379 trtype: tcp 00:12:00.379 adrfam: ipv4 00:12:00.379 subtype: discovery subsystem referral 00:12:00.379 treq: not required 00:12:00.379 portid: 0 00:12:00.379 trsvcid: 4430 00:12:00.379 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:00.379 traddr: 10.0.0.2 00:12:00.379 eflags: none 00:12:00.379 sectype: none 00:12:00.379 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:00.379 Perform nvmf subsystem discovery via RPC 00:12:00.379 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:00.379 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.379 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.379 [ 00:12:00.379 { 00:12:00.379 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:00.379 "subtype": "Discovery", 00:12:00.379 "listen_addresses": [ 00:12:00.379 { 00:12:00.379 "trtype": "TCP", 00:12:00.379 "adrfam": "IPv4", 00:12:00.379 "traddr": "10.0.0.2", 00:12:00.379 "trsvcid": "4420" 00:12:00.379 } 00:12:00.379 ], 00:12:00.379 "allow_any_host": true, 00:12:00.379 "hosts": [] 00:12:00.379 }, 00:12:00.379 { 00:12:00.379 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:00.379 "subtype": "NVMe", 00:12:00.379 "listen_addresses": [ 00:12:00.379 { 00:12:00.379 "trtype": "TCP", 00:12:00.379 "adrfam": "IPv4", 00:12:00.379 "traddr": "10.0.0.2", 00:12:00.379 "trsvcid": "4420" 00:12:00.379 } 00:12:00.379 ], 00:12:00.379 "allow_any_host": true, 00:12:00.379 "hosts": [], 00:12:00.379 "serial_number": "SPDK00000000000001", 00:12:00.379 "model_number": "SPDK bdev Controller", 00:12:00.379 "max_namespaces": 32, 00:12:00.379 "min_cntlid": 1, 00:12:00.379 "max_cntlid": 65519, 00:12:00.379 "namespaces": [ 00:12:00.379 { 00:12:00.379 "nsid": 1, 00:12:00.379 "bdev_name": "Null1", 00:12:00.379 "name": "Null1", 00:12:00.379 "nguid": "8EDE76A5B13B4C1BAAC9599CD0CFE379", 00:12:00.379 "uuid": "8ede76a5-b13b-4c1b-aac9-599cd0cfe379" 00:12:00.379 } 00:12:00.379 ] 00:12:00.379 }, 00:12:00.379 { 00:12:00.379 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:00.379 "subtype": "NVMe", 00:12:00.379 "listen_addresses": [ 00:12:00.379 { 00:12:00.379 "trtype": "TCP", 00:12:00.379 "adrfam": "IPv4", 00:12:00.379 "traddr": "10.0.0.2", 00:12:00.379 "trsvcid": "4420" 00:12:00.379 } 00:12:00.379 ], 00:12:00.379 "allow_any_host": true, 00:12:00.379 "hosts": [], 00:12:00.379 "serial_number": "SPDK00000000000002", 00:12:00.379 "model_number": "SPDK bdev Controller", 00:12:00.379 "max_namespaces": 32, 00:12:00.379 "min_cntlid": 1, 00:12:00.379 "max_cntlid": 65519, 00:12:00.379 "namespaces": [ 00:12:00.379 { 00:12:00.379 "nsid": 1, 00:12:00.379 "bdev_name": "Null2", 00:12:00.379 "name": "Null2", 00:12:00.379 "nguid": "FB12F7E93A224D999ADD1BA448398B92", 00:12:00.379 "uuid": "fb12f7e9-3a22-4d99-9add-1ba448398b92" 00:12:00.379 } 00:12:00.379 ] 00:12:00.379 }, 00:12:00.379 { 00:12:00.379 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:00.379 "subtype": "NVMe", 00:12:00.379 "listen_addresses": [ 00:12:00.379 { 00:12:00.379 "trtype": "TCP", 00:12:00.379 "adrfam": "IPv4", 00:12:00.379 "traddr": "10.0.0.2", 
00:12:00.379 "trsvcid": "4420" 00:12:00.379 } 00:12:00.379 ], 00:12:00.379 "allow_any_host": true, 00:12:00.379 "hosts": [], 00:12:00.379 "serial_number": "SPDK00000000000003", 00:12:00.379 "model_number": "SPDK bdev Controller", 00:12:00.379 "max_namespaces": 32, 00:12:00.379 "min_cntlid": 1, 00:12:00.379 "max_cntlid": 65519, 00:12:00.379 "namespaces": [ 00:12:00.379 { 00:12:00.379 "nsid": 1, 00:12:00.379 "bdev_name": "Null3", 00:12:00.379 "name": "Null3", 00:12:00.379 "nguid": "1646C1D9C1CF4B79ABF06E567186DFB7", 00:12:00.379 "uuid": "1646c1d9-c1cf-4b79-abf0-6e567186dfb7" 00:12:00.379 } 00:12:00.379 ] 00:12:00.379 }, 00:12:00.379 { 00:12:00.379 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:00.379 "subtype": "NVMe", 00:12:00.379 "listen_addresses": [ 00:12:00.379 { 00:12:00.379 "trtype": "TCP", 00:12:00.379 "adrfam": "IPv4", 00:12:00.379 "traddr": "10.0.0.2", 00:12:00.379 "trsvcid": "4420" 00:12:00.379 } 00:12:00.379 ], 00:12:00.379 "allow_any_host": true, 00:12:00.379 "hosts": [], 00:12:00.379 "serial_number": "SPDK00000000000004", 00:12:00.379 "model_number": "SPDK bdev Controller", 00:12:00.379 "max_namespaces": 32, 00:12:00.379 "min_cntlid": 1, 00:12:00.379 "max_cntlid": 65519, 00:12:00.379 "namespaces": [ 00:12:00.379 { 00:12:00.379 "nsid": 1, 00:12:00.379 "bdev_name": "Null4", 00:12:00.379 "name": "Null4", 00:12:00.379 "nguid": "B2F580D645484C10B1E0DFBED2DD9C1E", 00:12:00.379 "uuid": "b2f580d6-4548-4c10-b1e0-dfbed2dd9c1e" 00:12:00.379 } 00:12:00.379 ] 00:12:00.379 } 00:12:00.379 ] 00:12:00.379 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.379 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:00.379 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:00.379 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.379 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.379 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.379 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.379 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:00.379 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.379 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.379 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.379 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:00.379 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:00.379 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.379 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.379 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.380 06:57:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:00.380 06:57:21 
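Teardown is the mirror image of the setup: delete each subsystem before its backing bdev, drop the 4430 referral, and then assert that bdev_get_bdevs comes back empty (the check_bdevs variable below ends up blank). Roughly:

  for i in 1 2 3 4; do
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    scripts/rpc.py bdev_null_delete "Null$i"
  done
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  # Nothing should be left behind on the target.
  [ -z "$(scripts/rpc.py bdev_get_bdevs | jq -r '.[].name')" ] && echo 'no bdevs remain'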
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.380 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.640 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:00.641 rmmod nvme_tcp 00:12:00.641 rmmod nvme_fabrics 00:12:00.641 rmmod nvme_keyring 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 172204 ']' 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 172204 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 172204 ']' 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 172204 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 172204 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 172204' 00:12:00.641 killing process with pid 172204 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 172204 00:12:00.641 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 172204 00:12:00.900 06:57:21 
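After the target process is killed, nvmftestfini cleans up the host side: the kernel NVMe/TCP initiator modules are unloaded (the rmmod lines above), and, as the next records show, any iptables rules tagged with the SPDK_NVMF comment are stripped by round-tripping the ruleset. A sketch of those two steps, assuming root:

  # Unload the initiator stack; in this run modprobe -r also drops the fabrics/keyring deps.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # Remove only SPDK-tagged firewall rules, keep everything else intact.
  iptables-save | grep -v SPDK_NVMF | iptables-restore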
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:00.900 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:00.900 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:00.900 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:00.900 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:00.900 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:00.900 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:00.901 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:00.901 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:00.901 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.901 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.901 06:57:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.810 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:02.810 00:12:02.810 real 0m5.663s 00:12:02.810 user 0m4.846s 00:12:02.810 sys 0m1.993s 00:12:02.810 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.810 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.810 ************************************ 00:12:02.810 END TEST nvmf_target_discovery 00:12:02.810 ************************************ 00:12:02.810 06:57:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:02.810 06:57:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:02.810 06:57:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.810 06:57:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:02.810 ************************************ 00:12:02.810 START TEST nvmf_referrals 00:12:02.810 ************************************ 00:12:02.810 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:03.070 * Looking for test storage... 
00:12:03.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:03.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.070 --rc genhtml_branch_coverage=1 00:12:03.070 --rc genhtml_function_coverage=1 00:12:03.070 --rc genhtml_legend=1 00:12:03.070 --rc geninfo_all_blocks=1 00:12:03.070 --rc geninfo_unexecuted_blocks=1 00:12:03.070 00:12:03.070 ' 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:03.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.070 --rc genhtml_branch_coverage=1 00:12:03.070 --rc genhtml_function_coverage=1 00:12:03.070 --rc genhtml_legend=1 00:12:03.070 --rc geninfo_all_blocks=1 00:12:03.070 --rc geninfo_unexecuted_blocks=1 00:12:03.070 00:12:03.070 ' 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:03.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.070 --rc genhtml_branch_coverage=1 00:12:03.070 --rc genhtml_function_coverage=1 00:12:03.070 --rc genhtml_legend=1 00:12:03.070 --rc geninfo_all_blocks=1 00:12:03.070 --rc geninfo_unexecuted_blocks=1 00:12:03.070 00:12:03.070 ' 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:03.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.070 --rc genhtml_branch_coverage=1 00:12:03.070 --rc genhtml_function_coverage=1 00:12:03.070 --rc genhtml_legend=1 00:12:03.070 --rc geninfo_all_blocks=1 00:12:03.070 --rc geninfo_unexecuted_blocks=1 00:12:03.070 00:12:03.070 ' 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.070 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:03.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:03.071 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:03.071 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:03.071 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:03.071 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:03.071 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
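referrals.sh parameterizes everything up front: three loopback referral addresses (127.0.0.2 through 127.0.0.4) on service port 4430, the well-known discovery NQN, and a host identity generated on the fly with nvme gen-hostnqn. One way to derive that pair, shown here as an illustration rather than as what common.sh literally does (the ${...##*:} trim simply assumes the nqn.2014-08.org.nvmexpress:uuid:<uuid> layout seen in the log):

  NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep only the trailing UUID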
00:12:03.071 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:03.071 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:03.071 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:03.071 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:03.071 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:03.071 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:03.071 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.071 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:03.071 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:03.071 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:03.071 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.071 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.071 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.071 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:03.071 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:03.071 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:03.071 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:05.612 06:57:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:05.612 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:05.612 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:05.612 
06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.612 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:05.613 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:05.613 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:05.613 06:57:26 
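The NIC probe above walks the supported Intel/Mellanox PCI IDs and then maps each matching PCI function to its kernel netdev through sysfs, which is how 0000:0a:00.0 and 0000:0a:00.1 resolve to cvl_0_0 and cvl_0_1 on this machine. The lookup itself is just a directory listing:

  # Map a PCI function to its net device name (interface names are machine-specific).
  pci=0000:0a:00.0
  ls "/sys/bus/pci/devices/$pci/net/"   # -> cvl_0_0 in this run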
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:05.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:05.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:12:05.613 00:12:05.613 --- 10.0.0.2 ping statistics --- 00:12:05.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.613 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:05.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:05.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:12:05.613 00:12:05.613 --- 10.0.0.1 ping statistics --- 00:12:05.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.613 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=174300 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 174300 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 174300 ']' 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
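nvmftestinit then splits target and initiator across a network namespace: the target NIC moves into cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator NIC keeps 10.0.0.1/24, port 4420 is opened in the firewall, both directions are ping-checked, and nvmf_tgt is started inside the namespace. A condensed sketch of that bring-up, assuming root and this machine's interface names:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &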
00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.613 [2024-11-18 06:57:26.273655] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:12:05.613 [2024-11-18 06:57:26.273742] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.613 [2024-11-18 06:57:26.344170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.613 [2024-11-18 06:57:26.390131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.613 [2024-11-18 06:57:26.390182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.613 [2024-11-18 06:57:26.390211] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.613 [2024-11-18 06:57:26.390223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.613 [2024-11-18 06:57:26.390232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.613 [2024-11-18 06:57:26.391925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.613 [2024-11-18 06:57:26.391991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.613 [2024-11-18 06:57:26.392056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.613 [2024-11-18 06:57:26.392059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.613 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.614 [2024-11-18 06:57:26.531907] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.614 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.614 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:05.614 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.614 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:12:05.614 [2024-11-18 06:57:26.544174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:05.614 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.614 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:05.614 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.614 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.614 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.614 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:05.614 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.614 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.614 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.614 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:05.614 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.614 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.614 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.614 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:05.614 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:05.614 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.614 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.614 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.873 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.133 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.133 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:06.133 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:06.134 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.134 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.134 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.134 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:06.134 06:57:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:06.134 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:06.134 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:06.134 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:06.134 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:06.134 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:06.134 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:06.134 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:06.134 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:06.134 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.134 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.134 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.134 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:06.134 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.134 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.134 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.392 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:06.392 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:06.392 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:06.392 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.392 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:06.392 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.392 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:06.392 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.392 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:06.392 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:06.392 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:06.392 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:06.392 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:06.392 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:06.392 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:06.392 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:06.392 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:06.392 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:06.392 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:06.392 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:06.392 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:06.392 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:06.392 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:06.651 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:06.651 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:06.651 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:06.651 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:06.651 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:06.651 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:06.911 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:06.911 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:06.911 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.911 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.911 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.911 06:57:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:06.911 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:06.911 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:06.911 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.911 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:06.911 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.911 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:06.911 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.911 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:06.911 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:06.911 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:06.911 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:06.911 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:06.911 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:06.911 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:06.911 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:07.172 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:07.172 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:07.172 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:07.172 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:07.172 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:07.172 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:07.172 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:07.172 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:07.172 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:07.172 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:07.172 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:07.172 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:07.172 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:07.433 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:07.433 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:07.433 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.433 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.433 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.433 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:07.433 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:07.433 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.433 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.433 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.433 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:07.433 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:07.433 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:07.433 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:07.433 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:07.433 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:07.433 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
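The referral checks that just finished alternate between the target's own view (nvmf_discovery_get_referrals) and the initiator's view of the discovery log page (nvme discover), first for three plain referrals and then for referrals pinned to a subsystem NQN. Condensed into plain commands, the first round trip looks roughly like this (hostnqn/hostid and jq filters copied from the trace; rpc.py stands in for the rpc_cmd wrapper):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  hostid=5b23e107-7094-e311-b1cb-001e67a97d55

  # Add three referrals and confirm the target reports all of them.
  for a in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_add_referral -t tcp -a "$a" -s 4430
  done
  $rpc nvmf_discovery_get_referrals | jq length        # expect 3

  # Cross-check from the initiator: referrals appear as extra discovery records
  # next to the local ("current") discovery subsystem entry.
  nvme discover --hostnqn="$hostnqn" --hostid="$hostid" -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

  # Remove them again and verify both views go back to empty.
  for a in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_remove_referral -t tcp -a "$a" -s 4430
  done
  $rpc nvmf_discovery_get_referrals | jq length        # expect 0

The second half of the test repeats the pattern with -n discovery and -n nqn.2016-06.io.spdk:cnode1, checking each returned record's subnqn and subtype instead of only its address.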
00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:07.693 rmmod nvme_tcp 00:12:07.693 rmmod nvme_fabrics 00:12:07.693 rmmod nvme_keyring 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 174300 ']' 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 174300 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 174300 ']' 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 174300 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 174300 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 174300' 00:12:07.693 killing process with pid 174300 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 174300 00:12:07.693 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 174300 00:12:07.970 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:07.970 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:07.970 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:07.970 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:07.970 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:07.970 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:07.970 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:07.970 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:07.970 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:07.970 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.970 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.970 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.880 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:09.880 00:12:09.880 real 0m7.056s 00:12:09.880 user 0m11.076s 00:12:09.880 sys 0m2.325s 00:12:09.880 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.880 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.880 ************************************ 00:12:09.880 END TEST nvmf_referrals 00:12:09.880 ************************************ 00:12:09.880 06:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:09.880 06:57:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:09.880 06:57:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.880 06:57:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:10.139 ************************************ 00:12:10.139 START TEST nvmf_connect_disconnect 00:12:10.139 ************************************ 00:12:10.139 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:10.139 * Looking for test storage... 00:12:10.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:10.139 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:10.139 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:10.139 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:10.139 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:10.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.140 --rc genhtml_branch_coverage=1 00:12:10.140 --rc genhtml_function_coverage=1 00:12:10.140 --rc genhtml_legend=1 00:12:10.140 --rc geninfo_all_blocks=1 00:12:10.140 --rc geninfo_unexecuted_blocks=1 00:12:10.140 00:12:10.140 ' 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:10.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.140 --rc genhtml_branch_coverage=1 00:12:10.140 --rc genhtml_function_coverage=1 00:12:10.140 --rc genhtml_legend=1 00:12:10.140 --rc geninfo_all_blocks=1 00:12:10.140 --rc geninfo_unexecuted_blocks=1 00:12:10.140 00:12:10.140 ' 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:10.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.140 --rc genhtml_branch_coverage=1 00:12:10.140 --rc genhtml_function_coverage=1 00:12:10.140 --rc genhtml_legend=1 00:12:10.140 --rc geninfo_all_blocks=1 00:12:10.140 --rc geninfo_unexecuted_blocks=1 00:12:10.140 00:12:10.140 ' 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:10.140 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.140 --rc genhtml_branch_coverage=1 00:12:10.140 --rc genhtml_function_coverage=1 00:12:10.140 --rc genhtml_legend=1 00:12:10.140 --rc geninfo_all_blocks=1 00:12:10.140 --rc geninfo_unexecuted_blocks=1 00:12:10.140 00:12:10.140 ' 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.140 06:57:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:10.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:10.140 06:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:12.680 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:12.680 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:12.680 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:12.680 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:12.680 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:12.680 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:12.680 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:12.680 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:12.680 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:12.680 
06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:12.680 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:12.680 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:12.680 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:12.680 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:12.680 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:12.680 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:12.680 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:12.680 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:12.680 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:12.680 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:12.680 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:12.680 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:12.680 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:12.681 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:12.681 
06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:12.681 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:12.681 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
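The device scan above (gather_supported_nvmf_pci_devs) works by matching NICs on PCI vendor/device ID and then resolving each hit to its kernel net device through sysfs. The lookup it relies on can be reproduced by hand; the PCI address comes from the 'Found 0000:0a:00.0 (0x8086 - 0x159b)' line above, and reading operstate is an assumption about what the '[[ up == up ]]' check corresponds to:

  pci=0000:0a:00.0
  ls /sys/bus/pci/devices/$pci/net/         # prints cvl_0_0 on this rig
  cat /sys/class/net/cvl_0_0/operstate      # must report "up" for the port to be used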
00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:12.681 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:12.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:12.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:12:12.681 00:12:12.681 --- 10.0.0.2 ping statistics --- 00:12:12.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.681 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:12.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:12.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:12:12.681 00:12:12.681 --- 10.0.0.1 ping statistics --- 00:12:12.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.681 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:12.681 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:12.682 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:12.682 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:12.682 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:12.682 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:12.682 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:12.682 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=176608 00:12:12.682 06:57:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:12.682 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 176608 00:12:12.682 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 176608 ']' 00:12:12.682 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.682 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.682 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.682 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.682 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:12.682 [2024-11-18 06:57:33.496948] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:12:12.682 [2024-11-18 06:57:33.497060] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.682 [2024-11-18 06:57:33.569632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.682 [2024-11-18 06:57:33.616681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.682 [2024-11-18 06:57:33.616734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.682 [2024-11-18 06:57:33.616760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.682 [2024-11-18 06:57:33.616786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.682 [2024-11-18 06:57:33.616796] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
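The 10.0.0.1 <-> 10.0.0.2 reachability verified just above is the product of the nvmf_tcp_init plumbing traced earlier: one port of the e810 NIC (cvl_0_0) is moved into a private namespace and used by the target, while the second port (cvl_0_1) stays in the root namespace as the initiator side. A condensed sketch of that setup, with interface names, addresses and the firewall rule as logged (a summary, not the helper itself):

  # Target side: isolate one port in its own namespace with the target address.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Initiator side: the second port keeps the initiator address in the root namespace.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up

  # Allow NVMe/TCP (port 4420) in and confirm reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1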
00:12:12.682 [2024-11-18 06:57:33.618225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.682 [2024-11-18 06:57:33.618324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.682 [2024-11-18 06:57:33.618407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.682 [2024-11-18 06:57:33.618410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:12.943 [2024-11-18 06:57:33.762572] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:12.943 06:57:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:12.943 [2024-11-18 06:57:33.834019] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:12.943 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:15.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.389 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:15:18.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.459 [2024-11-18 07:00:57.935291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4e1e0 is same with the state(6) to be set 00:15:37.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.884 [2024-11-18 07:01:16.526322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4e1e0 is same with the state(6) to be set 00:15:55.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.807 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:04.807 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:04.807 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:04.807 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:04.807 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:04.807 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:04.807 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:04.807 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:04.807 rmmod nvme_tcp 00:16:04.807 rmmod nvme_fabrics 00:16:04.807 rmmod nvme_keyring 00:16:04.807 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:04.807 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:04.807 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:04.807 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 176608 ']' 00:16:04.807 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 176608 00:16:04.807 07:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 176608 ']' 00:16:04.807 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 176608 00:16:04.807 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:04.807 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:04.807 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 176608 00:16:05.066 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:05.066 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:05.066 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 176608' 00:16:05.066 killing process with pid 176608 00:16:05.066 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 176608 00:16:05.066 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 176608 00:16:05.066 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:05.066 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:05.066 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:05.066 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:05.066 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:05.066 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:05.066 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:05.066 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:05.066 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:05.066 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.066 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:05.066 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.610 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:07.610 00:16:07.610 real 3m57.196s 00:16:07.610 user 15m2.435s 00:16:07.610 sys 0m36.380s 00:16:07.610 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.610 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:07.610 ************************************ 00:16:07.610 END TEST nvmf_connect_disconnect 00:16:07.610 ************************************ 00:16:07.610 07:01:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:07.611 ************************************ 00:16:07.611 START TEST nvmf_multitarget 00:16:07.611 ************************************ 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:07.611 * Looking for test storage... 00:16:07.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:07.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.611 --rc genhtml_branch_coverage=1 00:16:07.611 --rc genhtml_function_coverage=1 00:16:07.611 --rc genhtml_legend=1 00:16:07.611 --rc geninfo_all_blocks=1 00:16:07.611 --rc geninfo_unexecuted_blocks=1 00:16:07.611 00:16:07.611 ' 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:07.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.611 --rc genhtml_branch_coverage=1 00:16:07.611 --rc genhtml_function_coverage=1 00:16:07.611 --rc genhtml_legend=1 00:16:07.611 --rc geninfo_all_blocks=1 00:16:07.611 --rc geninfo_unexecuted_blocks=1 00:16:07.611 00:16:07.611 ' 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:07.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.611 --rc genhtml_branch_coverage=1 00:16:07.611 --rc genhtml_function_coverage=1 00:16:07.611 --rc genhtml_legend=1 00:16:07.611 --rc geninfo_all_blocks=1 00:16:07.611 --rc geninfo_unexecuted_blocks=1 00:16:07.611 00:16:07.611 ' 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:07.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.611 --rc genhtml_branch_coverage=1 00:16:07.611 --rc genhtml_function_coverage=1 00:16:07.611 --rc genhtml_legend=1 00:16:07.611 --rc geninfo_all_blocks=1 00:16:07.611 --rc geninfo_unexecuted_blocks=1 00:16:07.611 00:16:07.611 ' 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:07.611 07:01:28 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:07.611 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:07.612 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:07.612 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.612 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.612 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:07.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:07.612 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:07.612 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:07.612 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:07.612 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:07.612 07:01:28 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:07.612 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:07.612 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.612 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:07.612 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:07.612 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:07.612 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.612 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:07.612 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.612 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:07.612 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:07.612 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:07.612 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:09.519 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:09.519 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:09.519 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:09.519 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:09.519 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:09.778 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:09.778 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:09.778 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:09.778 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:09.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:09.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:16:09.778 00:16:09.778 --- 10.0.0.2 ping statistics --- 00:16:09.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.778 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:16:09.778 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:09.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:09.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:16:09.778 00:16:09.778 --- 10.0.0.1 ping statistics --- 00:16:09.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.778 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:16:09.778 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:09.778 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:09.778 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:09.778 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:09.778 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:09.778 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:09.778 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:09.778 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:09.778 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:09.778 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:09.778 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:09.778 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:09.778 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:09.778 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=207808 00:16:09.778 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:09.778 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 207808 00:16:09.779 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 207808 ']' 00:16:09.779 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.779 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:09.779 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.779 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:09.779 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:09.779 [2024-11-18 07:01:30.609880] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:16:09.779 [2024-11-18 07:01:30.609981] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.779 [2024-11-18 07:01:30.691319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:09.779 [2024-11-18 07:01:30.743570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.779 [2024-11-18 07:01:30.743626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:09.779 [2024-11-18 07:01:30.743656] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:09.779 [2024-11-18 07:01:30.743668] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:09.779 [2024-11-18 07:01:30.743678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:09.779 [2024-11-18 07:01:30.745252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.779 [2024-11-18 07:01:30.745283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:09.779 [2024-11-18 07:01:30.745341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:09.779 [2024-11-18 07:01:30.745343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.037 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:10.037 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:10.037 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:10.037 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:10.037 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:10.037 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:10.037 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:10.037 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:10.037 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:10.295 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:10.295 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:10.295 "nvmf_tgt_1" 00:16:10.295 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:10.295 "nvmf_tgt_2" 00:16:10.554 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
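At this point multitarget.sh has confirmed that only the default target exists (nvmf_get_targets piped to jq length gave 1), created two extra targets named nvmf_tgt_1 and nvmf_tgt_2 (the quoted names above are the RPC replies), and issued another nvmf_get_targets; the jq length check that resumes below should now report 3, and the deletions that follow bring the count back to 1. Condensed into a stand-alone sketch (rpc_py as used by the script in this tree; the -s value is assumed to map to the RPC's max_subsystems parameter):

# Sketch of the multitarget checks traced around this point; illustrative only.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target so far
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32        # -s assumed to be max_subsystems
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default target plus the two new ones
$rpc_py nvmf_delete_target -n nvmf_tgt_1
$rpc_py nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default target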
00:16:10.554 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:10.554 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:10.554 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:10.554 true 00:16:10.554 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:10.812 true 00:16:10.812 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:10.812 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:10.812 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:10.812 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:10.812 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:10.812 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:10.812 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:10.812 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:10.812 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:10.812 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:10.812 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:10.812 rmmod nvme_tcp 00:16:10.812 rmmod nvme_fabrics 00:16:10.812 rmmod nvme_keyring 00:16:11.073 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:11.073 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:11.073 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:11.073 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 207808 ']' 00:16:11.073 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 207808 00:16:11.073 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 207808 ']' 00:16:11.073 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 207808 00:16:11.073 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:11.073 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:11.073 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 207808 00:16:11.073 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:11.073 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:11.073 07:01:31 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 207808' 00:16:11.073 killing process with pid 207808 00:16:11.073 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 207808 00:16:11.073 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 207808 00:16:11.334 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:11.334 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:11.334 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:11.334 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:11.334 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:11.334 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:11.334 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:11.334 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:11.334 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:11.334 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.334 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:11.334 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.246 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:13.246 00:16:13.246 real 0m5.989s 00:16:13.246 user 0m6.996s 00:16:13.246 sys 0m2.037s 00:16:13.246 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.246 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:13.246 ************************************ 00:16:13.246 END TEST nvmf_multitarget 00:16:13.246 ************************************ 00:16:13.246 07:01:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:13.246 07:01:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:13.246 07:01:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.246 07:01:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:13.246 ************************************ 00:16:13.246 START TEST nvmf_rpc 00:16:13.246 ************************************ 00:16:13.246 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:13.246 * Looking for test storage... 
00:16:13.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:13.246 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:13.246 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:16:13.246 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:13.506 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:13.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.507 --rc genhtml_branch_coverage=1 00:16:13.507 --rc genhtml_function_coverage=1 00:16:13.507 --rc genhtml_legend=1 00:16:13.507 --rc geninfo_all_blocks=1 00:16:13.507 --rc geninfo_unexecuted_blocks=1 00:16:13.507 00:16:13.507 ' 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:13.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.507 --rc genhtml_branch_coverage=1 00:16:13.507 --rc genhtml_function_coverage=1 00:16:13.507 --rc genhtml_legend=1 00:16:13.507 --rc geninfo_all_blocks=1 00:16:13.507 --rc geninfo_unexecuted_blocks=1 00:16:13.507 00:16:13.507 ' 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:13.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.507 --rc genhtml_branch_coverage=1 00:16:13.507 --rc genhtml_function_coverage=1 00:16:13.507 --rc genhtml_legend=1 00:16:13.507 --rc geninfo_all_blocks=1 00:16:13.507 --rc geninfo_unexecuted_blocks=1 00:16:13.507 00:16:13.507 ' 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:13.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.507 --rc genhtml_branch_coverage=1 00:16:13.507 --rc genhtml_function_coverage=1 00:16:13.507 --rc genhtml_legend=1 00:16:13.507 --rc geninfo_all_blocks=1 00:16:13.507 --rc geninfo_unexecuted_blocks=1 00:16:13.507 00:16:13.507 ' 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
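The lcov probe above gates coverage collection for the nvmf_rpc run: the harness reads the installed lcov version, splits it on dots and dashes, and compares it field by field against 2 so that an older lcov gets the option spelling it understands (the --rc lcov_branch_coverage form echoed below). A simplified bash sketch of that comparison, assuming lcov is on PATH; this illustrates the pattern in the trace, not the actual scripts/common.sh code:

  ver=$(lcov --version | awk '{print $NF}')   # 1.15 on this builder
  IFS=.- read -ra have <<< "$ver"             # have=(1 15)
  IFS=.- read -ra want <<< "2"                # want=(2)
  if (( ${have[0]:-0} < ${want[0]:-0} )); then
      # pre-2.x lcov: use the branch/function coverage options echoed in the log
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi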
00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:13.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:13.507 07:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:13.507 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:16.045 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:16.045 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:16.045 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.045 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:16.046 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:16.046 07:01:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:16.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:16.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:16:16.046 00:16:16.046 --- 10.0.0.2 ping statistics --- 00:16:16.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.046 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:16.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:16.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:16:16.046 00:16:16.046 --- 10.0.0.1 ping statistics --- 00:16:16.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.046 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=209917 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 209917 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 209917 ']' 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:16.046 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.046 [2024-11-18 07:01:36.783177] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
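At this point nvmftestinit has built the physical TCP test bed: the two E810 ports found during the PCI scan become cvl_0_0 (target side, moved into the cvl_0_0_ns_spdk network namespace with address 10.0.0.2/24) and cvl_0_1 (initiator side, left in the root namespace with 10.0.0.1/24), an iptables rule opens TCP port 4420 on the initiator interface, both directions are ping-tested, and nvmf_tgt is started inside the namespace. A condensed sketch of that sequence, using only commands and addresses printed in the trace (the real helpers add bookkeeping such as the SPDK_NVMF comment on the iptables rule):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace
  # launch the target inside the namespace and remember its pid (209917 in this run)
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

waitforlisten then blocks until the target's RPC socket (/var/tmp/spdk.sock) answers, after which rpc.sh starts issuing the nvmf_* RPCs traced below.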
00:16:16.046 [2024-11-18 07:01:36.783255] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.046 [2024-11-18 07:01:36.858636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:16.046 [2024-11-18 07:01:36.904329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.046 [2024-11-18 07:01:36.904383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:16.046 [2024-11-18 07:01:36.904411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:16.046 [2024-11-18 07:01:36.904423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:16.046 [2024-11-18 07:01:36.904434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:16.046 [2024-11-18 07:01:36.906073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.046 [2024-11-18 07:01:36.906138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.046 [2024-11-18 07:01:36.906160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:16.046 [2024-11-18 07:01:36.906164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.046 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.046 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:16.046 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:16.046 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:16.046 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.305 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:16.305 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:16.305 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.305 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.305 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.305 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:16.305 "tick_rate": 2700000000, 00:16:16.305 "poll_groups": [ 00:16:16.305 { 00:16:16.305 "name": "nvmf_tgt_poll_group_000", 00:16:16.305 "admin_qpairs": 0, 00:16:16.305 "io_qpairs": 0, 00:16:16.305 "current_admin_qpairs": 0, 00:16:16.305 "current_io_qpairs": 0, 00:16:16.305 "pending_bdev_io": 0, 00:16:16.305 "completed_nvme_io": 0, 00:16:16.305 "transports": [] 00:16:16.305 }, 00:16:16.305 { 00:16:16.305 "name": "nvmf_tgt_poll_group_001", 00:16:16.305 "admin_qpairs": 0, 00:16:16.305 "io_qpairs": 0, 00:16:16.305 "current_admin_qpairs": 0, 00:16:16.305 "current_io_qpairs": 0, 00:16:16.305 "pending_bdev_io": 0, 00:16:16.305 "completed_nvme_io": 0, 00:16:16.305 "transports": [] 00:16:16.305 }, 00:16:16.305 { 00:16:16.305 "name": "nvmf_tgt_poll_group_002", 00:16:16.305 "admin_qpairs": 0, 00:16:16.305 "io_qpairs": 0, 00:16:16.305 
"current_admin_qpairs": 0, 00:16:16.305 "current_io_qpairs": 0, 00:16:16.305 "pending_bdev_io": 0, 00:16:16.305 "completed_nvme_io": 0, 00:16:16.305 "transports": [] 00:16:16.305 }, 00:16:16.305 { 00:16:16.305 "name": "nvmf_tgt_poll_group_003", 00:16:16.305 "admin_qpairs": 0, 00:16:16.305 "io_qpairs": 0, 00:16:16.305 "current_admin_qpairs": 0, 00:16:16.305 "current_io_qpairs": 0, 00:16:16.305 "pending_bdev_io": 0, 00:16:16.305 "completed_nvme_io": 0, 00:16:16.305 "transports": [] 00:16:16.305 } 00:16:16.305 ] 00:16:16.305 }' 00:16:16.305 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:16.305 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:16.305 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:16.305 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:16.305 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:16.305 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:16.305 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:16.305 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:16.305 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.305 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.306 [2024-11-18 07:01:37.143497] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:16.306 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.306 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:16.306 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.306 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.306 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.306 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:16.306 "tick_rate": 2700000000, 00:16:16.306 "poll_groups": [ 00:16:16.306 { 00:16:16.306 "name": "nvmf_tgt_poll_group_000", 00:16:16.306 "admin_qpairs": 0, 00:16:16.306 "io_qpairs": 0, 00:16:16.306 "current_admin_qpairs": 0, 00:16:16.306 "current_io_qpairs": 0, 00:16:16.306 "pending_bdev_io": 0, 00:16:16.306 "completed_nvme_io": 0, 00:16:16.306 "transports": [ 00:16:16.306 { 00:16:16.306 "trtype": "TCP" 00:16:16.306 } 00:16:16.306 ] 00:16:16.306 }, 00:16:16.306 { 00:16:16.306 "name": "nvmf_tgt_poll_group_001", 00:16:16.306 "admin_qpairs": 0, 00:16:16.306 "io_qpairs": 0, 00:16:16.306 "current_admin_qpairs": 0, 00:16:16.306 "current_io_qpairs": 0, 00:16:16.306 "pending_bdev_io": 0, 00:16:16.306 "completed_nvme_io": 0, 00:16:16.306 "transports": [ 00:16:16.306 { 00:16:16.306 "trtype": "TCP" 00:16:16.306 } 00:16:16.306 ] 00:16:16.306 }, 00:16:16.306 { 00:16:16.306 "name": "nvmf_tgt_poll_group_002", 00:16:16.306 "admin_qpairs": 0, 00:16:16.306 "io_qpairs": 0, 00:16:16.306 "current_admin_qpairs": 0, 00:16:16.306 "current_io_qpairs": 0, 00:16:16.306 "pending_bdev_io": 0, 00:16:16.306 "completed_nvme_io": 0, 00:16:16.306 "transports": [ 00:16:16.306 { 00:16:16.306 "trtype": "TCP" 
00:16:16.306 } 00:16:16.306 ] 00:16:16.306 }, 00:16:16.306 { 00:16:16.306 "name": "nvmf_tgt_poll_group_003", 00:16:16.306 "admin_qpairs": 0, 00:16:16.306 "io_qpairs": 0, 00:16:16.306 "current_admin_qpairs": 0, 00:16:16.306 "current_io_qpairs": 0, 00:16:16.306 "pending_bdev_io": 0, 00:16:16.306 "completed_nvme_io": 0, 00:16:16.306 "transports": [ 00:16:16.306 { 00:16:16.306 "trtype": "TCP" 00:16:16.306 } 00:16:16.306 ] 00:16:16.306 } 00:16:16.306 ] 00:16:16.306 }' 00:16:16.306 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:16.306 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:16.306 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:16.306 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:16.306 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:16.306 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:16.306 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:16.306 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:16.306 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:16.306 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:16.306 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:16.306 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:16.306 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:16.306 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:16.306 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.306 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.566 Malloc1 00:16:16.566 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.566 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:16.566 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.566 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.566 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.566 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:16.566 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.566 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.566 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.566 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:16.566 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.566 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.566 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.566 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:16.566 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.566 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.566 [2024-11-18 07:01:37.320052] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:16.566 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.567 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:16.567 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:16.567 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:16.567 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:16.567 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:16.567 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:16.567 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:16.567 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:16.567 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:16.567 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:16.567 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:16.567 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:16.567 [2024-11-18 07:01:37.342662] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:16.567 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:16.567 could not add new controller: failed to write to nvme-fabrics device 00:16:16.567 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:16.567 07:01:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:16.567 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:16.567 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:16.567 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.567 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.567 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.567 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.567 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:17.137 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:17.137 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:17.137 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:17.137 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:17.137 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:19.045 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:19.045 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:19.045 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:19.045 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:19.045 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:19.045 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:19.045 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:19.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:19.307 [2024-11-18 07:01:40.155188] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:19.307 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:19.307 could not add new controller: failed to write to nvme-fabrics device 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.307 
07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.307 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:19.875 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:19.875 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:19.875 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:19.875 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:19.875 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:22.418 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:22.418 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:22.418 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:22.418 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:22.418 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:22.418 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:22.418 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:22.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.418 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:22.418 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:22.418 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:22.418 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.418 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:22.418 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.418 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:22.418 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.419 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.419 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.419 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.419 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:22.419 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:22.419 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:22.419 
07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.419 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.419 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.419 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.419 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.419 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.419 [2024-11-18 07:01:42.911840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.419 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.419 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:22.419 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.419 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.419 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.419 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:22.419 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.419 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.419 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.419 07:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:22.679 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:22.679 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:22.679 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:22.679 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:22.679 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:24.581 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:24.581 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:24.581 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:24.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.840 [2024-11-18 07:01:45.704167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.840 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:25.409 07:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:25.409 07:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:25.409 07:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:25.409 07:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:25.409 07:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:27.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.950 [2024-11-18 07:01:48.465827] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.950 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:28.210 07:01:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:28.210 07:01:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:28.210 07:01:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:28.210 07:01:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:28.210 07:01:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:30.748 
07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:30.748 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:30.748 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:30.748 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:30.748 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:30.748 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:30.748 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:30.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.748 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:30.748 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:30.748 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:30.748 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:30.748 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:30.748 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:30.748 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:30.748 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:30.748 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.748 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.749 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.749 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:30.749 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.749 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.749 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.749 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:30.749 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:30.749 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.749 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.749 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.749 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:30.749 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
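The waitforserial / waitforserial_disconnect steps traced above come from common/autotest_common.sh (around lines 1202-1235 in this run): after nvme connect they poll lsblk until a block device carrying the expected serial appears, and after nvme disconnect they poll until it is gone again. A minimal bash sketch reconstructed from the xtrace; the helper names and the 15-retry limit follow what the trace shows, and the real helpers may differ in detail.

waitforserial() {
    # Poll until the expected number of devices with this serial show up.
    local serial=$1 i=0
    local nvme_device_counter=${2:-1} nvme_devices=0
    while ((i++ <= 15)); do
        sleep 2
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        ((nvme_devices == nvme_device_counter)) && return 0
    done
    return 1
}

waitforserial_disconnect() {
    # Poll until no block device with this serial is left after nvme disconnect.
    local serial=$1 i=0
    while ((i++ <= 15)); do
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
        sleep 1
    done
    return 1
}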
00:16:30.749 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.749 [2024-11-18 07:01:51.236418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.749 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.749 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:30.749 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.749 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.749 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.749 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:30.749 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.749 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.749 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.749 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:31.008 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:31.008 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:31.008 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:31.008 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:31.008 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:33.551 07:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:33.551 07:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:33.551 07:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:33.551 07:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:33.551 07:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:33.551 07:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:33.551 07:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:33.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.552 [2024-11-18 07:01:54.069367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.552 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:33.811 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:33.811 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:33.811 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:33.811 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:33.811 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:35.718 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:35.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:35.980 
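Unrolled, every iteration traced above (target/rpc.sh lines 81-94 in this run) performs the same sequence: create the subsystem, expose it over TCP, attach the Malloc1 namespace, allow any host, connect from the kernel initiator, verify the serial shows up, then tear everything back down. The following loop body is condensed from the xtrace; rpc_cmd wraps scripts/rpc.py against the running nvmf_tgt, NVME_HOST stands in for the --hostnqn/--hostid pair seen in the log, and the loop variable names are assumptions.

for i in $(seq 1 "$loops"); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done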
07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.980 [2024-11-18 07:01:56.821439] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.980 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.981 [2024-11-18 07:01:56.869553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.981 
07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.981 [2024-11-18 07:01:56.917701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.981 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.241 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.241 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:36.241 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.241 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.241 [2024-11-18 07:01:56.965896] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.241 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.241 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:36.242 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.242 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.242 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.242 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:36.242 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.242 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.242 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.242 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:36.242 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.242 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.242 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.242 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:36.242 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.242 07:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.242 [2024-11-18 07:01:57.014070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:36.242 "tick_rate": 2700000000, 00:16:36.242 "poll_groups": [ 00:16:36.242 { 00:16:36.242 "name": "nvmf_tgt_poll_group_000", 00:16:36.242 "admin_qpairs": 2, 00:16:36.242 "io_qpairs": 84, 00:16:36.242 "current_admin_qpairs": 0, 00:16:36.242 "current_io_qpairs": 0, 00:16:36.242 "pending_bdev_io": 0, 00:16:36.242 "completed_nvme_io": 134, 00:16:36.242 "transports": [ 00:16:36.242 { 00:16:36.242 "trtype": "TCP" 00:16:36.242 } 00:16:36.242 ] 00:16:36.242 }, 00:16:36.242 { 00:16:36.242 "name": "nvmf_tgt_poll_group_001", 00:16:36.242 "admin_qpairs": 2, 00:16:36.242 "io_qpairs": 84, 00:16:36.242 "current_admin_qpairs": 0, 00:16:36.242 "current_io_qpairs": 0, 00:16:36.242 "pending_bdev_io": 0, 00:16:36.242 "completed_nvme_io": 136, 00:16:36.242 "transports": [ 00:16:36.242 { 00:16:36.242 "trtype": "TCP" 00:16:36.242 } 00:16:36.242 ] 00:16:36.242 }, 00:16:36.242 { 00:16:36.242 "name": "nvmf_tgt_poll_group_002", 00:16:36.242 "admin_qpairs": 1, 00:16:36.242 "io_qpairs": 84, 00:16:36.242 "current_admin_qpairs": 0, 00:16:36.242 "current_io_qpairs": 0, 00:16:36.242 "pending_bdev_io": 0, 00:16:36.242 "completed_nvme_io": 281, 00:16:36.242 "transports": [ 00:16:36.242 { 00:16:36.242 "trtype": "TCP" 00:16:36.242 } 00:16:36.242 ] 00:16:36.242 }, 00:16:36.242 { 00:16:36.242 "name": "nvmf_tgt_poll_group_003", 00:16:36.242 "admin_qpairs": 2, 00:16:36.242 "io_qpairs": 84, 00:16:36.242 "current_admin_qpairs": 0, 00:16:36.242 "current_io_qpairs": 0, 00:16:36.242 "pending_bdev_io": 0, 00:16:36.242 "completed_nvme_io": 135, 00:16:36.242 "transports": [ 00:16:36.242 { 00:16:36.242 "trtype": "TCP" 00:16:36.242 } 00:16:36.242 ] 00:16:36.242 } 00:16:36.242 ] 00:16:36.242 }' 00:16:36.242 07:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:36.242 rmmod nvme_tcp 00:16:36.242 rmmod nvme_fabrics 00:16:36.242 rmmod nvme_keyring 00:16:36.242 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:36.502 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:36.502 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:36.502 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 209917 ']' 00:16:36.502 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 209917 00:16:36.502 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 209917 ']' 00:16:36.502 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 209917 00:16:36.502 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:16:36.502 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:36.502 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 209917 00:16:36.502 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:36.502 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:36.502 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 209917' 
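The stats check traced above (target/rpc.sh lines 110-113) captures nvmf_get_stats once and then sums a single numeric field across the four poll groups with the jsum helper (rpc.sh lines 19-20): jq extracts the field per poll group and awk adds the values. A short sketch assuming jsum reads the captured $stats JSON, as the surrounding trace suggests; the exact plumbing in rpc.sh may differ.

# stats holds the nvmf_get_stats JSON captured at rpc.sh@110 above.
jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
}

(($(jsum '.poll_groups[].admin_qpairs') > 0))   # 2+2+1+2 = 7 in this run
(($(jsum '.poll_groups[].io_qpairs') > 0))      # 4 x 84 = 336 in this run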
00:16:36.502 killing process with pid 209917 00:16:36.502 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 209917 00:16:36.502 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 209917 00:16:36.762 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:36.762 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:36.762 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:36.762 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:36.762 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:16:36.762 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:36.762 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:16:36.762 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:36.762 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:36.762 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.762 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:36.762 07:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.671 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:38.671 00:16:38.671 real 0m25.388s 00:16:38.671 user 1m21.811s 00:16:38.671 sys 0m4.302s 00:16:38.671 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.671 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.671 ************************************ 00:16:38.671 END TEST nvmf_rpc 00:16:38.671 ************************************ 00:16:38.671 07:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:38.671 07:01:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:38.671 07:01:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.671 07:01:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:38.671 ************************************ 00:16:38.671 START TEST nvmf_invalid 00:16:38.671 ************************************ 00:16:38.671 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:38.930 * Looking for test storage... 
00:16:38.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:38.930 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:38.930 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:16:38.930 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:38.930 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:38.930 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:38.930 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:38.930 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:38.930 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.930 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:38.930 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:38.930 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:38.930 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:38.930 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:38.930 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:38.930 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:38.930 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:38.930 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:38.930 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:38.930 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:38.930 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:38.930 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:38.930 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:38.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.931 --rc genhtml_branch_coverage=1 00:16:38.931 --rc genhtml_function_coverage=1 00:16:38.931 --rc genhtml_legend=1 00:16:38.931 --rc geninfo_all_blocks=1 00:16:38.931 --rc geninfo_unexecuted_blocks=1 00:16:38.931 00:16:38.931 ' 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:38.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.931 --rc genhtml_branch_coverage=1 00:16:38.931 --rc genhtml_function_coverage=1 00:16:38.931 --rc genhtml_legend=1 00:16:38.931 --rc geninfo_all_blocks=1 00:16:38.931 --rc geninfo_unexecuted_blocks=1 00:16:38.931 00:16:38.931 ' 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:38.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.931 --rc genhtml_branch_coverage=1 00:16:38.931 --rc genhtml_function_coverage=1 00:16:38.931 --rc genhtml_legend=1 00:16:38.931 --rc geninfo_all_blocks=1 00:16:38.931 --rc geninfo_unexecuted_blocks=1 00:16:38.931 00:16:38.931 ' 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:38.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.931 --rc genhtml_branch_coverage=1 00:16:38.931 --rc genhtml_function_coverage=1 00:16:38.931 --rc genhtml_legend=1 00:16:38.931 --rc geninfo_all_blocks=1 00:16:38.931 --rc geninfo_unexecuted_blocks=1 00:16:38.931 00:16:38.931 ' 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:38.931 07:01:59 
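The digression through scripts/common.sh traced above is a plain version comparison: autotest_common.sh reads the installed lcov version (1.15 here) and, because it is below 2, picks the pre-2.0 --rc lcov_branch_coverage/lcov_function_coverage spelling for LCOV_OPTS. A rough sketch of the comparison visible in the trace (split on '.', '-' and ':', then compare component-wise); the real cmp_versions supports more operators and non-numeric components.

lt() { cmp_versions "$1" '<' "$2"; }   # e.g. lt 1.15 2, as in the trace

cmp_versions() {
    # Compare $1 and $3 component-wise under operator $2 ('<' or '>').
    local -a ver1 ver2
    local op=$2 v len a b
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    len=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
    for ((v = 0; v < len; v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        ((a > b)) && [[ $op == '>' ]] && return 0
        ((a > b)) && return 1
        ((a < b)) && [[ $op == '<' ]] && return 0
        ((a < b)) && return 1
    done
    return 1   # versions are equal: neither strictly '<' nor '>'
}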
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:38.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:38.931 07:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:41.473 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:41.473 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:41.473 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:41.473 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:41.473 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:41.473 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:41.473 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:41.473 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:41.473 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:41.473 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:41.473 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:41.473 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:41.473 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:41.473 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:41.473 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:41.473 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:41.473 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:41.473 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:41.473 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:41.474 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:41.474 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:41.474 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:41.474 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:41.474 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:41.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:41.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:16:41.474 00:16:41.474 --- 10.0.0.2 ping statistics --- 00:16:41.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.474 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:41.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:41.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:16:41.474 00:16:41.474 --- 10.0.0.1 ping statistics --- 00:16:41.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.474 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=214401 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 214401 00:16:41.474 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 214401 ']' 00:16:41.475 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.475 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.475 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.475 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.475 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:41.475 [2024-11-18 07:02:02.191149] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
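The nvmf_tcp_init steps traced above split the two detected e810 ports between a target network namespace and the default (initiator) namespace, assign 10.0.0.2/10.0.0.1, open TCP port 4420, and verify reachability in both directions before nvmf_tgt is launched inside the namespace. For reference, a minimal shell sketch of that sequence, using the interface names and addresses from this particular run (the real common.sh derives the names from the detected NICs and issues the iptables rule through its ipts wrapper, which tags the rule with an SPDK_NVMF comment):

  # Sketch of the target/initiator split performed by nvmf_tcp_init above.
  # Interface names and addresses are copied from this run, not hard-coded in the harness.
  TARGET_IF=cvl_0_0          # port handed to the SPDK target, moved into a netns
  INITIATOR_IF=cvl_0_1       # port left in the default namespace for the initiator
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                         # initiator namespace -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1     # target namespace -> initiator
  # The target application is then started inside the namespace, e.g.:
  #   ip netns exec "$NS" nvmf_tgt -i 0 -e 0xFFFF -m 0xF

With that in place, NVMF_APP is prefixed with the netns exec command, so the nvmf_tgt started a few lines below runs inside the namespace that owns 10.0.0.2, while rpc.py and the initiator-side tools run from the default namespace.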
00:16:41.475 [2024-11-18 07:02:02.191238] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.475 [2024-11-18 07:02:02.266371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:41.475 [2024-11-18 07:02:02.315213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.475 [2024-11-18 07:02:02.315268] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.475 [2024-11-18 07:02:02.315297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:41.475 [2024-11-18 07:02:02.315315] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:41.475 [2024-11-18 07:02:02.315326] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:41.475 [2024-11-18 07:02:02.317022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.475 [2024-11-18 07:02:02.317078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.475 [2024-11-18 07:02:02.317144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:41.475 [2024-11-18 07:02:02.317147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.475 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:41.475 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:16:41.475 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:41.475 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:41.475 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:41.733 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.733 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:41.733 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode16891 00:16:41.733 [2024-11-18 07:02:02.706371] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:41.994 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:41.994 { 00:16:41.994 "nqn": "nqn.2016-06.io.spdk:cnode16891", 00:16:41.994 "tgt_name": "foobar", 00:16:41.994 "method": "nvmf_create_subsystem", 00:16:41.994 "req_id": 1 00:16:41.994 } 00:16:41.994 Got JSON-RPC error response 00:16:41.994 response: 00:16:41.994 { 00:16:41.994 "code": -32603, 00:16:41.994 "message": "Unable to find target foobar" 00:16:41.994 }' 00:16:41.994 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:41.994 { 00:16:41.994 "nqn": "nqn.2016-06.io.spdk:cnode16891", 00:16:41.994 "tgt_name": "foobar", 00:16:41.994 "method": "nvmf_create_subsystem", 00:16:41.994 "req_id": 1 00:16:41.994 } 00:16:41.994 Got JSON-RPC error response 00:16:41.994 
response: 00:16:41.994 { 00:16:41.994 "code": -32603, 00:16:41.994 "message": "Unable to find target foobar" 00:16:41.994 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:41.994 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:41.994 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode26920 00:16:42.253 [2024-11-18 07:02:02.979304] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26920: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:42.253 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:42.253 { 00:16:42.253 "nqn": "nqn.2016-06.io.spdk:cnode26920", 00:16:42.253 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:42.253 "method": "nvmf_create_subsystem", 00:16:42.253 "req_id": 1 00:16:42.253 } 00:16:42.253 Got JSON-RPC error response 00:16:42.253 response: 00:16:42.253 { 00:16:42.253 "code": -32602, 00:16:42.253 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:42.253 }' 00:16:42.253 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:42.253 { 00:16:42.253 "nqn": "nqn.2016-06.io.spdk:cnode26920", 00:16:42.253 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:42.253 "method": "nvmf_create_subsystem", 00:16:42.253 "req_id": 1 00:16:42.253 } 00:16:42.253 Got JSON-RPC error response 00:16:42.253 response: 00:16:42.253 { 00:16:42.253 "code": -32602, 00:16:42.253 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:42.253 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:42.253 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:42.253 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode5621 00:16:42.513 [2024-11-18 07:02:03.248157] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5621: invalid model number 'SPDK_Controller' 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:42.513 { 00:16:42.513 "nqn": "nqn.2016-06.io.spdk:cnode5621", 00:16:42.513 "model_number": "SPDK_Controller\u001f", 00:16:42.513 "method": "nvmf_create_subsystem", 00:16:42.513 "req_id": 1 00:16:42.513 } 00:16:42.513 Got JSON-RPC error response 00:16:42.513 response: 00:16:42.513 { 00:16:42.513 "code": -32602, 00:16:42.513 "message": "Invalid MN SPDK_Controller\u001f" 00:16:42.513 }' 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:42.513 { 00:16:42.513 "nqn": "nqn.2016-06.io.spdk:cnode5621", 00:16:42.513 "model_number": "SPDK_Controller\u001f", 00:16:42.513 "method": "nvmf_create_subsystem", 00:16:42.513 "req_id": 1 00:16:42.513 } 00:16:42.513 Got JSON-RPC error response 00:16:42.513 response: 00:16:42.513 { 00:16:42.513 "code": -32602, 00:16:42.513 "message": "Invalid MN SPDK_Controller\u001f" 00:16:42.513 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:42.513 07:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.513 07:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.513 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:42.514 
07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 
00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[  == \- ]] 00:16:42.514 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '+TqPcU /dev/null' 00:16:45.879 07:02:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.789 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:47.789 00:16:47.789 real 0m9.120s 00:16:47.789 user 0m21.475s 00:16:47.789 sys 0m2.577s 00:16:47.789 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:47.789 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:47.789 ************************************ 00:16:47.789 END TEST nvmf_invalid 00:16:47.789 ************************************ 00:16:47.789 07:02:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:47.789 07:02:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:47.789 07:02:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:47.789 07:02:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:48.049 ************************************ 00:16:48.049 START TEST nvmf_connect_stress 00:16:48.049 ************************************ 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:48.049 * Looking for test storage... 
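The long printf/echo/string+= trace that fills the tail of the nvmf_invalid run above is target/invalid.sh's gen_random_s helper assembling a 21-character random string (the test pins RANDOM=0 near the top, so the sequence is reproducible). A condensed sketch, reconstructed from the xtrace rather than from the script source: the 32..127 code table and the printf %x / echo -e round-trip are visible in the trace, while the index expression is an assumption and the '-' check seen at invalid.sh line 28 is omitted here.

  # Hedged reconstruction of gen_random_s as seen in the trace above.
  gen_random_s() {
      local length=$1 ll string=
      local chars=($(seq 32 127))   # same code points as the chars=('32' ... '127') array above
      for ((ll = 0; ll < length; ll++)); do
          # pick a code point, convert it to hex, and append the matching character
          string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
      done
      printf '%s\n' "$string"   # the real helper echoes the string; printf avoids echo's option parsing
  }

  # Used as in the earlier negative checks, e.g. (illustrative, not copied from the script):
  #   rpc.py nvmf_create_subsystem -s "$(gen_random_s 21)" nqn.2016-06.io.spdk:cnodeNNNN

The generated string feeds further nvmf_create_subsystem negative checks of the same shape as the invalid-SN and invalid-MN cases shown earlier.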
00:16:48.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:48.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.049 --rc genhtml_branch_coverage=1 00:16:48.049 --rc genhtml_function_coverage=1 00:16:48.049 --rc genhtml_legend=1 00:16:48.049 --rc geninfo_all_blocks=1 00:16:48.049 --rc geninfo_unexecuted_blocks=1 00:16:48.049 00:16:48.049 ' 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:48.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.049 --rc genhtml_branch_coverage=1 00:16:48.049 --rc genhtml_function_coverage=1 00:16:48.049 --rc genhtml_legend=1 00:16:48.049 --rc geninfo_all_blocks=1 00:16:48.049 --rc geninfo_unexecuted_blocks=1 00:16:48.049 00:16:48.049 ' 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:48.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.049 --rc genhtml_branch_coverage=1 00:16:48.049 --rc genhtml_function_coverage=1 00:16:48.049 --rc genhtml_legend=1 00:16:48.049 --rc geninfo_all_blocks=1 00:16:48.049 --rc geninfo_unexecuted_blocks=1 00:16:48.049 00:16:48.049 ' 00:16:48.049 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:48.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.049 --rc genhtml_branch_coverage=1 00:16:48.049 --rc genhtml_function_coverage=1 00:16:48.049 --rc genhtml_legend=1 00:16:48.049 --rc geninfo_all_blocks=1 00:16:48.050 --rc geninfo_unexecuted_blocks=1 00:16:48.050 00:16:48.050 ' 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:48.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:16:48.050 07:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:16:50.591 07:02:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:50.591 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:50.591 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:50.591 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:50.591 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
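
The discovery loop traced above resolves each whitelisted PCI function (the e810/x722/mlx ID arrays) to its kernel network interface purely through sysfs; for the two E810 ports (0x8086:0x159b, ice driver) it reports cvl_0_0 and cvl_0_1. A condensed reading of that loop, using the array names as they appear in the trace; the real code in nvmf/common.sh additionally checks the bound driver and skips interfaces that are not up:

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev(s) registered for this PCI function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name, e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
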
-- # net_devs+=("${pci_net_devs[@]}") 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:50.591 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:50.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:50.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:16:50.592 00:16:50.592 --- 10.0.0.2 ping statistics --- 00:16:50.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.592 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:50.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:50.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:16:50.592 00:16:50.592 --- 10.0.0.1 ping statistics --- 00:16:50.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.592 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=217060 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 217060 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 217060 ']' 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:50.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.592 [2024-11-18 07:02:11.237300] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:16:50.592 [2024-11-18 07:02:11.237394] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.592 [2024-11-18 07:02:11.313411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:50.592 [2024-11-18 07:02:11.359466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:50.592 [2024-11-18 07:02:11.359532] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:50.592 [2024-11-18 07:02:11.359547] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:50.592 [2024-11-18 07:02:11.359559] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:50.592 [2024-11-18 07:02:11.359568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:50.592 [2024-11-18 07:02:11.360936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:50.592 [2024-11-18 07:02:11.360998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:50.592 [2024-11-18 07:02:11.361001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.592 [2024-11-18 07:02:11.503120] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
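
The trace from the netns commands through the RPCs that continue below is the whole point-to-point fixture this test runs on: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1 (the two ports evidently reach each other directly, since both pings succeed), an iptables rule opens TCP/4420, and nvmf_tgt is started inside the namespace before rpc_cmd provisions the subsystem the stress tool will attack. A condensed sketch of that sequence, assembled from the commands visible in the trace (paths shortened, the iptables comment string abbreviated, and rpc.py assumed as the mechanism behind rpc_cmd):

    # network topology: root netns = initiator (10.0.0.1), cvl_0_0_ns_spdk = target (10.0.0.2)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment 'SPDK_NVMF:...'     # tagged so cleanup can strip exactly this rule

    # target application (run from the spdk checkout) and NVMe-oF provisioning
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512
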
00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.592 [2024-11-18 07:02:11.520592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.592 NULL1 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=217201 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.592 07:02:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.592 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.593 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.593 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.593 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.593 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.593 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.593 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.593 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.853 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.853 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.853 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.853 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.853 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.853 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.853 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.853 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.853 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.853 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.853 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.853 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.853 07:02:11 
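
With the subsystem in place, connect_stress.sh launches the connect_stress binary (PERF_PID=217201) against nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 for 10 seconds, and the seq 1 20 / cat loop above apparently batches 20 RPC requests into rpc.txt. The repeated "kill -0 217201" checks that follow are the monitor loop: as long as the stress process is alive, the script keeps replaying RPCs at the target. A paraphrase of that loop as the trace suggests it, not the literal script text:

    ./test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!

    while kill -0 "$PERF_PID" 2>/dev/null; do   # stress tool still connecting/disconnecting?
        rpc_cmd < "$rpcs"     # rpc_cmd appears with no arguments in the trace; feeding it the batch is an assumption
        sleep 1               # assumption: the pacing between checks is not visible in the trace
    done

When the 10 seconds expire, kill -0 fails with "No such process" (seen further down), the script waits on the pid, removes rpc.txt, and moves on to teardown.
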
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:50.853 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.853 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.853 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.114 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.114 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:51.114 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.114 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.114 07:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.374 07:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.374 07:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:51.374 07:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.374 07:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.374 07:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.634 07:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.634 07:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:51.634 07:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.634 07:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.634 07:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.893 07:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.893 07:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:51.893 07:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.893 07:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.893 07:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.512 07:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.512 07:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:52.512 07:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.512 07:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.512 07:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.784 07:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.784 07:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:52.784 07:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.784 07:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.784 07:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.070 07:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.070 07:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:53.070 07:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.070 07:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.070 07:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.354 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.354 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:53.354 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.354 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.354 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.635 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.635 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:53.635 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.635 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.636 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.919 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.919 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:53.919 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.919 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.919 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.198 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.198 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:54.198 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.198 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.198 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.473 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.473 07:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:54.473 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.473 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.473 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.096 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.096 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:55.096 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.096 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.096 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.365 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.365 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:55.365 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.365 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.365 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.635 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.635 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:55.635 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.635 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.635 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.910 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.910 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:55.910 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.910 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.910 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.192 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.192 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:56.192 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.192 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.192 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.500 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.500 07:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:56.500 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.500 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.500 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.794 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.794 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:56.794 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.794 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.794 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.067 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.067 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:57.067 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.067 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.067 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.679 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.679 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:57.679 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.679 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.679 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.679 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.679 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:57.679 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.679 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.679 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.334 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.334 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:58.334 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.334 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.334 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.610 07:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.610 07:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:58.610 07:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.610 07:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.610 07:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.883 07:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.883 07:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:58.883 07:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.883 07:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.883 07:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.163 07:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.163 07:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:59.163 07:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.163 07:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.163 07:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.440 07:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.440 07:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:59.440 07:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.440 07:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.440 07:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.718 07:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.718 07:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:59.719 07:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.719 07:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.719 07:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.981 07:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.982 07:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:16:59.982 07:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.982 07:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.982 07:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.551 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.551 07:02:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:17:00.551 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.551 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.551 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.811 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.811 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:17:00.811 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.811 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.811 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.811 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217201 00:17:01.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (217201) - No such process 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 217201 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:01.071 rmmod nvme_tcp 00:17:01.071 rmmod nvme_fabrics 00:17:01.071 rmmod nvme_keyring 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 217060 ']' 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 217060 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 217060 ']' 00:17:01.071 07:02:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 217060 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 217060 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 217060' 00:17:01.071 killing process with pid 217060 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 217060 00:17:01.071 07:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 217060 00:17:01.330 07:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:01.330 07:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:01.330 07:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:01.330 07:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:01.330 07:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:01.330 07:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:01.330 07:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:01.330 07:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:01.330 07:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:01.330 07:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.330 07:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.330 07:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.240 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:03.240 00:17:03.240 real 0m15.408s 00:17:03.240 user 0m39.906s 00:17:03.240 sys 0m4.607s 00:17:03.240 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.240 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.240 ************************************ 00:17:03.240 END TEST nvmf_connect_stress 00:17:03.240 ************************************ 00:17:03.240 07:02:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:03.240 07:02:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:03.240 07:02:24 
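
nvmftestfini above unwinds the fixture in roughly the reverse order of the setup: unload the initiator-side kernel modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), kill the nvmf_tgt process (pid 217060 here), strip only the firewall rules this run tagged with the SPDK_NVMF comment, remove the target namespace, and flush the leftover address, after which the 0m15.408s summary closes TEST nvmf_connect_stress and run_test moves on to nvmf_fused_ordering. Condensed from the trace; the body of _remove_spdk_ns is silenced in the log, so the namespace deletion shown is an assumed effect rather than a command taken from the trace:

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"             # pid 217060 in this run
    # keep every firewall rule except the ones tagged with the SPDK_NVMF comment
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk                # assumption: actual _remove_spdk_ns output is redirected away
    ip -4 addr flush cvl_0_1
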
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.240 07:02:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:03.500 ************************************ 00:17:03.500 START TEST nvmf_fused_ordering 00:17:03.500 ************************************ 00:17:03.500 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:03.500 * Looking for test storage... 00:17:03.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:03.500 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:03.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.501 --rc genhtml_branch_coverage=1 00:17:03.501 --rc genhtml_function_coverage=1 00:17:03.501 --rc genhtml_legend=1 00:17:03.501 --rc geninfo_all_blocks=1 00:17:03.501 --rc geninfo_unexecuted_blocks=1 00:17:03.501 00:17:03.501 ' 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:03.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.501 --rc genhtml_branch_coverage=1 00:17:03.501 --rc genhtml_function_coverage=1 00:17:03.501 --rc genhtml_legend=1 00:17:03.501 --rc geninfo_all_blocks=1 00:17:03.501 --rc geninfo_unexecuted_blocks=1 00:17:03.501 00:17:03.501 ' 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:03.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.501 --rc genhtml_branch_coverage=1 00:17:03.501 --rc genhtml_function_coverage=1 00:17:03.501 --rc genhtml_legend=1 00:17:03.501 --rc geninfo_all_blocks=1 00:17:03.501 --rc geninfo_unexecuted_blocks=1 00:17:03.501 00:17:03.501 ' 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:03.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.501 --rc genhtml_branch_coverage=1 00:17:03.501 --rc genhtml_function_coverage=1 00:17:03.501 --rc genhtml_legend=1 00:17:03.501 --rc geninfo_all_blocks=1 00:17:03.501 --rc geninfo_unexecuted_blocks=1 00:17:03.501 00:17:03.501 ' 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:03.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:03.501 07:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:06.036 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:06.036 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:06.036 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:06.036 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:06.036 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:06.036 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:06.036 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:06.036 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:06.036 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:06.036 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:06.036 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:06.036 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:06.036 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:06.036 07:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:06.036 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:06.036 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:06.036 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:06.036 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:06.036 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:06.036 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:06.036 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:06.036 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:06.036 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:06.036 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:06.037 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:06.037 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:06.037 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:06.037 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:06.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:17:06.037 00:17:06.037 --- 10.0.0.2 ping statistics --- 00:17:06.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.037 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:06.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:06.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:17:06.037 00:17:06.037 --- 10.0.0.1 ping statistics --- 00:17:06.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.037 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=220401 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 220401 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 220401 ']' 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.037 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:06.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.038 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.038 07:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:06.038 [2024-11-18 07:02:26.815697] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:17:06.038 [2024-11-18 07:02:26.815785] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.038 [2024-11-18 07:02:26.886208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.038 [2024-11-18 07:02:26.931048] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.038 [2024-11-18 07:02:26.931101] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.038 [2024-11-18 07:02:26.931124] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.038 [2024-11-18 07:02:26.931135] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.038 [2024-11-18 07:02:26.931145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.038 [2024-11-18 07:02:26.931813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:06.296 [2024-11-18 07:02:27.069805] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:06.296 [2024-11-18 07:02:27.086020] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:06.296 NULL1 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.296 07:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:06.296 [2024-11-18 07:02:27.129160] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
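Condensed for readers following the trace, the nvmf_fused_ordering bring-up and teardown logged in this section amount to the sketch below. It is reconstructed only from commands visible in this run (interface names cvl_0_0/cvl_0_1, addresses 10.0.0.1/10.0.0.2, port 4420, core mask 0x2, and the binary paths are taken from the log itself); the rpc_cmd shim, running from the SPDK repo root, the socket-wait loop in place of the harness's waitforlisten, and "ip netns delete" standing in for the _remove_spdk_ns helper are simplifying assumptions of this sketch, not part of the original test scripts.

    #!/usr/bin/env bash
    # Minimal sketch of the fused-ordering test flow traced above (NVMe/TCP on the e810 ports).
    set -e

    NS=cvl_0_0_ns_spdk                 # target-side network namespace
    TGT_IF=cvl_0_0; INI_IF=cvl_0_1     # net devices found under 0000:0a:00.0 / 0000:0a:00.1
    TGT_IP=10.0.0.2; INI_IP=10.0.0.1

    # Stand-in for the harness helper of the same name; it forwards to SPDK's RPC client.
    rpc_cmd() { ./scripts/rpc.py "$@"; }

    # Move the target port into its own namespace and address both ends.
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add "$INI_IP/24" dev "$INI_IF"
    ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Open TCP/4420 on the initiator interface (tagged so teardown can strip it) and
    # verify reachability in both directions, as the pings above show.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 "$TGT_IP"
    ip netns exec "$NS" ping -c 1 "$INI_IP"

    # Start the target inside the namespace (core mask 0x2, all tracepoint groups enabled),
    # then wait for its RPC socket (the harness uses waitforlisten for this).
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    tgt_pid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done

    # Configure transport, subsystem, listener, and a null-bdev-backed namespace over RPC.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a "$TGT_IP" -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512
    rpc_cmd bdev_wait_for_examine
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # Run the fused-command ordering exerciser against the listener; it prints one
    # fused_ordering(N) progress line per iteration (0..1023 in this run), which is
    # the enumeration that follows in the log.
    ./test/nvme/fused_ordering/fused_ordering \
        -r "trtype:tcp adrfam:IPv4 traddr:$TGT_IP trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1"

    # Teardown, mirroring nvmftestfini at the end of the test.
    kill "$tgt_pid"
    modprobe -v -r nvme-tcp || true
    modprobe -v -r nvme-fabrics || true
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete "$NS"
    ip -4 addr flush "$INI_IF"

The bdev_null_create NULL1 1000 512 call backs the namespace with a 1000 MB null bdev using 512-byte blocks, which is why the attach message further down reports "Namespace ID: 1 size: 1GB".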
00:17:06.296 [2024-11-18 07:02:27.129193] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid220427 ] 00:17:06.868 Attached to nqn.2016-06.io.spdk:cnode1 00:17:06.868 Namespace ID: 1 size: 1GB 00:17:06.868 fused_ordering(0) 00:17:06.868 fused_ordering(1) 00:17:06.868 fused_ordering(2) 00:17:06.868 fused_ordering(3) 00:17:06.868 fused_ordering(4) 00:17:06.868 fused_ordering(5) 00:17:06.868 fused_ordering(6) 00:17:06.868 fused_ordering(7) 00:17:06.868 fused_ordering(8) 00:17:06.868 fused_ordering(9) 00:17:06.868 fused_ordering(10) 00:17:06.868 fused_ordering(11) 00:17:06.868 fused_ordering(12) 00:17:06.868 fused_ordering(13) 00:17:06.868 fused_ordering(14) 00:17:06.868 fused_ordering(15) 00:17:06.868 fused_ordering(16) 00:17:06.868 fused_ordering(17) 00:17:06.868 fused_ordering(18) 00:17:06.868 fused_ordering(19) 00:17:06.868 fused_ordering(20) 00:17:06.868 fused_ordering(21) 00:17:06.868 fused_ordering(22) 00:17:06.868 fused_ordering(23) 00:17:06.868 fused_ordering(24) 00:17:06.868 fused_ordering(25) 00:17:06.868 fused_ordering(26) 00:17:06.868 fused_ordering(27) 00:17:06.868 fused_ordering(28) 00:17:06.868 fused_ordering(29) 00:17:06.868 fused_ordering(30) 00:17:06.868 fused_ordering(31) 00:17:06.868 fused_ordering(32) 00:17:06.868 fused_ordering(33) 00:17:06.868 fused_ordering(34) 00:17:06.868 fused_ordering(35) 00:17:06.868 fused_ordering(36) 00:17:06.868 fused_ordering(37) 00:17:06.868 fused_ordering(38) 00:17:06.868 fused_ordering(39) 00:17:06.868 fused_ordering(40) 00:17:06.868 fused_ordering(41) 00:17:06.868 fused_ordering(42) 00:17:06.868 fused_ordering(43) 00:17:06.868 fused_ordering(44) 00:17:06.868 fused_ordering(45) 00:17:06.868 fused_ordering(46) 00:17:06.868 fused_ordering(47) 00:17:06.868 fused_ordering(48) 00:17:06.868 fused_ordering(49) 00:17:06.868 fused_ordering(50) 00:17:06.868 fused_ordering(51) 00:17:06.868 fused_ordering(52) 00:17:06.868 fused_ordering(53) 00:17:06.868 fused_ordering(54) 00:17:06.868 fused_ordering(55) 00:17:06.868 fused_ordering(56) 00:17:06.868 fused_ordering(57) 00:17:06.868 fused_ordering(58) 00:17:06.868 fused_ordering(59) 00:17:06.868 fused_ordering(60) 00:17:06.868 fused_ordering(61) 00:17:06.868 fused_ordering(62) 00:17:06.868 fused_ordering(63) 00:17:06.868 fused_ordering(64) 00:17:06.868 fused_ordering(65) 00:17:06.868 fused_ordering(66) 00:17:06.868 fused_ordering(67) 00:17:06.868 fused_ordering(68) 00:17:06.868 fused_ordering(69) 00:17:06.868 fused_ordering(70) 00:17:06.868 fused_ordering(71) 00:17:06.868 fused_ordering(72) 00:17:06.868 fused_ordering(73) 00:17:06.868 fused_ordering(74) 00:17:06.868 fused_ordering(75) 00:17:06.868 fused_ordering(76) 00:17:06.868 fused_ordering(77) 00:17:06.868 fused_ordering(78) 00:17:06.868 fused_ordering(79) 00:17:06.868 fused_ordering(80) 00:17:06.868 fused_ordering(81) 00:17:06.868 fused_ordering(82) 00:17:06.868 fused_ordering(83) 00:17:06.868 fused_ordering(84) 00:17:06.868 fused_ordering(85) 00:17:06.868 fused_ordering(86) 00:17:06.868 fused_ordering(87) 00:17:06.868 fused_ordering(88) 00:17:06.868 fused_ordering(89) 00:17:06.868 fused_ordering(90) 00:17:06.868 fused_ordering(91) 00:17:06.868 fused_ordering(92) 00:17:06.868 fused_ordering(93) 00:17:06.868 fused_ordering(94) 00:17:06.868 fused_ordering(95) 00:17:06.868 fused_ordering(96) 00:17:06.868 fused_ordering(97) 00:17:06.868 fused_ordering(98) 
00:17:06.868 fused_ordering(99) 00:17:06.868 fused_ordering(100) 00:17:06.868 fused_ordering(101) 00:17:06.868 fused_ordering(102) 00:17:06.868 fused_ordering(103) 00:17:06.868 fused_ordering(104) 00:17:06.868 fused_ordering(105) 00:17:06.868 fused_ordering(106) 00:17:06.868 fused_ordering(107) 00:17:06.868 fused_ordering(108) 00:17:06.868 fused_ordering(109) 00:17:06.868 fused_ordering(110) 00:17:06.868 fused_ordering(111) 00:17:06.868 fused_ordering(112) 00:17:06.868 fused_ordering(113) 00:17:06.868 fused_ordering(114) 00:17:06.868 fused_ordering(115) 00:17:06.868 fused_ordering(116) 00:17:06.868 fused_ordering(117) 00:17:06.868 fused_ordering(118) 00:17:06.868 fused_ordering(119) 00:17:06.868 fused_ordering(120) 00:17:06.868 fused_ordering(121) 00:17:06.868 fused_ordering(122) 00:17:06.868 fused_ordering(123) 00:17:06.868 fused_ordering(124) 00:17:06.868 fused_ordering(125) 00:17:06.868 fused_ordering(126) 00:17:06.868 fused_ordering(127) 00:17:06.868 fused_ordering(128) 00:17:06.868 fused_ordering(129) 00:17:06.868 fused_ordering(130) 00:17:06.868 fused_ordering(131) 00:17:06.868 fused_ordering(132) 00:17:06.868 fused_ordering(133) 00:17:06.868 fused_ordering(134) 00:17:06.868 fused_ordering(135) 00:17:06.868 fused_ordering(136) 00:17:06.868 fused_ordering(137) 00:17:06.868 fused_ordering(138) 00:17:06.868 fused_ordering(139) 00:17:06.868 fused_ordering(140) 00:17:06.868 fused_ordering(141) 00:17:06.868 fused_ordering(142) 00:17:06.868 fused_ordering(143) 00:17:06.868 fused_ordering(144) 00:17:06.868 fused_ordering(145) 00:17:06.868 fused_ordering(146) 00:17:06.868 fused_ordering(147) 00:17:06.868 fused_ordering(148) 00:17:06.868 fused_ordering(149) 00:17:06.868 fused_ordering(150) 00:17:06.868 fused_ordering(151) 00:17:06.869 fused_ordering(152) 00:17:06.869 fused_ordering(153) 00:17:06.869 fused_ordering(154) 00:17:06.869 fused_ordering(155) 00:17:06.869 fused_ordering(156) 00:17:06.869 fused_ordering(157) 00:17:06.869 fused_ordering(158) 00:17:06.869 fused_ordering(159) 00:17:06.869 fused_ordering(160) 00:17:06.869 fused_ordering(161) 00:17:06.869 fused_ordering(162) 00:17:06.869 fused_ordering(163) 00:17:06.869 fused_ordering(164) 00:17:06.869 fused_ordering(165) 00:17:06.869 fused_ordering(166) 00:17:06.869 fused_ordering(167) 00:17:06.869 fused_ordering(168) 00:17:06.869 fused_ordering(169) 00:17:06.869 fused_ordering(170) 00:17:06.869 fused_ordering(171) 00:17:06.869 fused_ordering(172) 00:17:06.869 fused_ordering(173) 00:17:06.869 fused_ordering(174) 00:17:06.869 fused_ordering(175) 00:17:06.869 fused_ordering(176) 00:17:06.869 fused_ordering(177) 00:17:06.869 fused_ordering(178) 00:17:06.869 fused_ordering(179) 00:17:06.869 fused_ordering(180) 00:17:06.869 fused_ordering(181) 00:17:06.869 fused_ordering(182) 00:17:06.869 fused_ordering(183) 00:17:06.869 fused_ordering(184) 00:17:06.869 fused_ordering(185) 00:17:06.869 fused_ordering(186) 00:17:06.869 fused_ordering(187) 00:17:06.869 fused_ordering(188) 00:17:06.869 fused_ordering(189) 00:17:06.869 fused_ordering(190) 00:17:06.869 fused_ordering(191) 00:17:06.869 fused_ordering(192) 00:17:06.869 fused_ordering(193) 00:17:06.869 fused_ordering(194) 00:17:06.869 fused_ordering(195) 00:17:06.869 fused_ordering(196) 00:17:06.869 fused_ordering(197) 00:17:06.869 fused_ordering(198) 00:17:06.869 fused_ordering(199) 00:17:06.869 fused_ordering(200) 00:17:06.869 fused_ordering(201) 00:17:06.869 fused_ordering(202) 00:17:06.869 fused_ordering(203) 00:17:06.869 fused_ordering(204) 00:17:06.869 fused_ordering(205) 00:17:07.130 
fused_ordering(206) 00:17:07.130 fused_ordering(207) 00:17:07.130 fused_ordering(208) 00:17:07.130 fused_ordering(209) 00:17:07.130 fused_ordering(210) 00:17:07.130 fused_ordering(211) 00:17:07.130 fused_ordering(212) 00:17:07.130 fused_ordering(213) 00:17:07.130 fused_ordering(214) 00:17:07.130 fused_ordering(215) 00:17:07.130 fused_ordering(216) 00:17:07.130 fused_ordering(217) 00:17:07.130 fused_ordering(218) 00:17:07.130 fused_ordering(219) 00:17:07.130 fused_ordering(220) 00:17:07.130 fused_ordering(221) 00:17:07.130 fused_ordering(222) 00:17:07.130 fused_ordering(223) 00:17:07.130 fused_ordering(224) 00:17:07.130 fused_ordering(225) 00:17:07.130 fused_ordering(226) 00:17:07.130 fused_ordering(227) 00:17:07.130 fused_ordering(228) 00:17:07.130 fused_ordering(229) 00:17:07.130 fused_ordering(230) 00:17:07.130 fused_ordering(231) 00:17:07.130 fused_ordering(232) 00:17:07.130 fused_ordering(233) 00:17:07.130 fused_ordering(234) 00:17:07.130 fused_ordering(235) 00:17:07.130 fused_ordering(236) 00:17:07.130 fused_ordering(237) 00:17:07.130 fused_ordering(238) 00:17:07.130 fused_ordering(239) 00:17:07.130 fused_ordering(240) 00:17:07.130 fused_ordering(241) 00:17:07.130 fused_ordering(242) 00:17:07.130 fused_ordering(243) 00:17:07.130 fused_ordering(244) 00:17:07.130 fused_ordering(245) 00:17:07.130 fused_ordering(246) 00:17:07.130 fused_ordering(247) 00:17:07.130 fused_ordering(248) 00:17:07.130 fused_ordering(249) 00:17:07.130 fused_ordering(250) 00:17:07.130 fused_ordering(251) 00:17:07.130 fused_ordering(252) 00:17:07.130 fused_ordering(253) 00:17:07.130 fused_ordering(254) 00:17:07.130 fused_ordering(255) 00:17:07.130 fused_ordering(256) 00:17:07.130 fused_ordering(257) 00:17:07.130 fused_ordering(258) 00:17:07.130 fused_ordering(259) 00:17:07.130 fused_ordering(260) 00:17:07.130 fused_ordering(261) 00:17:07.130 fused_ordering(262) 00:17:07.130 fused_ordering(263) 00:17:07.130 fused_ordering(264) 00:17:07.130 fused_ordering(265) 00:17:07.130 fused_ordering(266) 00:17:07.130 fused_ordering(267) 00:17:07.130 fused_ordering(268) 00:17:07.130 fused_ordering(269) 00:17:07.130 fused_ordering(270) 00:17:07.130 fused_ordering(271) 00:17:07.130 fused_ordering(272) 00:17:07.130 fused_ordering(273) 00:17:07.130 fused_ordering(274) 00:17:07.130 fused_ordering(275) 00:17:07.130 fused_ordering(276) 00:17:07.130 fused_ordering(277) 00:17:07.130 fused_ordering(278) 00:17:07.130 fused_ordering(279) 00:17:07.130 fused_ordering(280) 00:17:07.130 fused_ordering(281) 00:17:07.130 fused_ordering(282) 00:17:07.130 fused_ordering(283) 00:17:07.130 fused_ordering(284) 00:17:07.130 fused_ordering(285) 00:17:07.130 fused_ordering(286) 00:17:07.130 fused_ordering(287) 00:17:07.130 fused_ordering(288) 00:17:07.130 fused_ordering(289) 00:17:07.130 fused_ordering(290) 00:17:07.130 fused_ordering(291) 00:17:07.130 fused_ordering(292) 00:17:07.130 fused_ordering(293) 00:17:07.130 fused_ordering(294) 00:17:07.130 fused_ordering(295) 00:17:07.130 fused_ordering(296) 00:17:07.130 fused_ordering(297) 00:17:07.130 fused_ordering(298) 00:17:07.130 fused_ordering(299) 00:17:07.130 fused_ordering(300) 00:17:07.130 fused_ordering(301) 00:17:07.130 fused_ordering(302) 00:17:07.130 fused_ordering(303) 00:17:07.130 fused_ordering(304) 00:17:07.130 fused_ordering(305) 00:17:07.130 fused_ordering(306) 00:17:07.130 fused_ordering(307) 00:17:07.130 fused_ordering(308) 00:17:07.130 fused_ordering(309) 00:17:07.130 fused_ordering(310) 00:17:07.130 fused_ordering(311) 00:17:07.130 fused_ordering(312) 00:17:07.130 fused_ordering(313) 
00:17:07.130 fused_ordering(314) 00:17:07.130 fused_ordering(315) 00:17:07.130 fused_ordering(316) 00:17:07.130 fused_ordering(317) 00:17:07.130 fused_ordering(318) 00:17:07.130 fused_ordering(319) 00:17:07.130 fused_ordering(320) 00:17:07.130 fused_ordering(321) 00:17:07.130 fused_ordering(322) 00:17:07.130 fused_ordering(323) 00:17:07.130 fused_ordering(324) 00:17:07.130 fused_ordering(325) 00:17:07.130 fused_ordering(326) 00:17:07.130 fused_ordering(327) 00:17:07.130 fused_ordering(328) 00:17:07.130 fused_ordering(329) 00:17:07.130 fused_ordering(330) 00:17:07.130 fused_ordering(331) 00:17:07.130 fused_ordering(332) 00:17:07.130 fused_ordering(333) 00:17:07.130 fused_ordering(334) 00:17:07.130 fused_ordering(335) 00:17:07.130 fused_ordering(336) 00:17:07.130 fused_ordering(337) 00:17:07.130 fused_ordering(338) 00:17:07.130 fused_ordering(339) 00:17:07.130 fused_ordering(340) 00:17:07.130 fused_ordering(341) 00:17:07.130 fused_ordering(342) 00:17:07.130 fused_ordering(343) 00:17:07.130 fused_ordering(344) 00:17:07.130 fused_ordering(345) 00:17:07.130 fused_ordering(346) 00:17:07.130 fused_ordering(347) 00:17:07.130 fused_ordering(348) 00:17:07.130 fused_ordering(349) 00:17:07.130 fused_ordering(350) 00:17:07.130 fused_ordering(351) 00:17:07.130 fused_ordering(352) 00:17:07.130 fused_ordering(353) 00:17:07.130 fused_ordering(354) 00:17:07.130 fused_ordering(355) 00:17:07.130 fused_ordering(356) 00:17:07.130 fused_ordering(357) 00:17:07.130 fused_ordering(358) 00:17:07.130 fused_ordering(359) 00:17:07.130 fused_ordering(360) 00:17:07.130 fused_ordering(361) 00:17:07.130 fused_ordering(362) 00:17:07.130 fused_ordering(363) 00:17:07.130 fused_ordering(364) 00:17:07.130 fused_ordering(365) 00:17:07.130 fused_ordering(366) 00:17:07.130 fused_ordering(367) 00:17:07.130 fused_ordering(368) 00:17:07.130 fused_ordering(369) 00:17:07.130 fused_ordering(370) 00:17:07.130 fused_ordering(371) 00:17:07.130 fused_ordering(372) 00:17:07.130 fused_ordering(373) 00:17:07.130 fused_ordering(374) 00:17:07.130 fused_ordering(375) 00:17:07.130 fused_ordering(376) 00:17:07.130 fused_ordering(377) 00:17:07.130 fused_ordering(378) 00:17:07.130 fused_ordering(379) 00:17:07.130 fused_ordering(380) 00:17:07.130 fused_ordering(381) 00:17:07.131 fused_ordering(382) 00:17:07.131 fused_ordering(383) 00:17:07.131 fused_ordering(384) 00:17:07.131 fused_ordering(385) 00:17:07.131 fused_ordering(386) 00:17:07.131 fused_ordering(387) 00:17:07.131 fused_ordering(388) 00:17:07.131 fused_ordering(389) 00:17:07.131 fused_ordering(390) 00:17:07.131 fused_ordering(391) 00:17:07.131 fused_ordering(392) 00:17:07.131 fused_ordering(393) 00:17:07.131 fused_ordering(394) 00:17:07.131 fused_ordering(395) 00:17:07.131 fused_ordering(396) 00:17:07.131 fused_ordering(397) 00:17:07.131 fused_ordering(398) 00:17:07.131 fused_ordering(399) 00:17:07.131 fused_ordering(400) 00:17:07.131 fused_ordering(401) 00:17:07.131 fused_ordering(402) 00:17:07.131 fused_ordering(403) 00:17:07.131 fused_ordering(404) 00:17:07.131 fused_ordering(405) 00:17:07.131 fused_ordering(406) 00:17:07.131 fused_ordering(407) 00:17:07.131 fused_ordering(408) 00:17:07.131 fused_ordering(409) 00:17:07.131 fused_ordering(410) 00:17:07.389 fused_ordering(411) 00:17:07.389 fused_ordering(412) 00:17:07.389 fused_ordering(413) 00:17:07.389 fused_ordering(414) 00:17:07.389 fused_ordering(415) 00:17:07.389 fused_ordering(416) 00:17:07.389 fused_ordering(417) 00:17:07.389 fused_ordering(418) 00:17:07.389 fused_ordering(419) 00:17:07.389 fused_ordering(420) 00:17:07.389 
fused_ordering(421) 00:17:07.389 fused_ordering(422) 00:17:07.389 fused_ordering(423) 00:17:07.389 fused_ordering(424) 00:17:07.389 fused_ordering(425) 00:17:07.389 fused_ordering(426) 00:17:07.389 fused_ordering(427) 00:17:07.389 fused_ordering(428) 00:17:07.389 fused_ordering(429) 00:17:07.389 fused_ordering(430) 00:17:07.389 fused_ordering(431) 00:17:07.389 fused_ordering(432) 00:17:07.389 fused_ordering(433) 00:17:07.389 fused_ordering(434) 00:17:07.389 fused_ordering(435) 00:17:07.389 fused_ordering(436) 00:17:07.389 fused_ordering(437) 00:17:07.389 fused_ordering(438) 00:17:07.389 fused_ordering(439) 00:17:07.389 fused_ordering(440) 00:17:07.389 fused_ordering(441) 00:17:07.389 fused_ordering(442) 00:17:07.389 fused_ordering(443) 00:17:07.389 fused_ordering(444) 00:17:07.389 fused_ordering(445) 00:17:07.389 fused_ordering(446) 00:17:07.389 fused_ordering(447) 00:17:07.389 fused_ordering(448) 00:17:07.389 fused_ordering(449) 00:17:07.389 fused_ordering(450) 00:17:07.389 fused_ordering(451) 00:17:07.389 fused_ordering(452) 00:17:07.389 fused_ordering(453) 00:17:07.389 fused_ordering(454) 00:17:07.389 fused_ordering(455) 00:17:07.389 fused_ordering(456) 00:17:07.389 fused_ordering(457) 00:17:07.389 fused_ordering(458) 00:17:07.389 fused_ordering(459) 00:17:07.389 fused_ordering(460) 00:17:07.389 fused_ordering(461) 00:17:07.389 fused_ordering(462) 00:17:07.389 fused_ordering(463) 00:17:07.389 fused_ordering(464) 00:17:07.389 fused_ordering(465) 00:17:07.389 fused_ordering(466) 00:17:07.389 fused_ordering(467) 00:17:07.389 fused_ordering(468) 00:17:07.389 fused_ordering(469) 00:17:07.389 fused_ordering(470) 00:17:07.389 fused_ordering(471) 00:17:07.389 fused_ordering(472) 00:17:07.389 fused_ordering(473) 00:17:07.389 fused_ordering(474) 00:17:07.389 fused_ordering(475) 00:17:07.389 fused_ordering(476) 00:17:07.389 fused_ordering(477) 00:17:07.389 fused_ordering(478) 00:17:07.389 fused_ordering(479) 00:17:07.390 fused_ordering(480) 00:17:07.390 fused_ordering(481) 00:17:07.390 fused_ordering(482) 00:17:07.390 fused_ordering(483) 00:17:07.390 fused_ordering(484) 00:17:07.390 fused_ordering(485) 00:17:07.390 fused_ordering(486) 00:17:07.390 fused_ordering(487) 00:17:07.390 fused_ordering(488) 00:17:07.390 fused_ordering(489) 00:17:07.390 fused_ordering(490) 00:17:07.390 fused_ordering(491) 00:17:07.390 fused_ordering(492) 00:17:07.390 fused_ordering(493) 00:17:07.390 fused_ordering(494) 00:17:07.390 fused_ordering(495) 00:17:07.390 fused_ordering(496) 00:17:07.390 fused_ordering(497) 00:17:07.390 fused_ordering(498) 00:17:07.390 fused_ordering(499) 00:17:07.390 fused_ordering(500) 00:17:07.390 fused_ordering(501) 00:17:07.390 fused_ordering(502) 00:17:07.390 fused_ordering(503) 00:17:07.390 fused_ordering(504) 00:17:07.390 fused_ordering(505) 00:17:07.390 fused_ordering(506) 00:17:07.390 fused_ordering(507) 00:17:07.390 fused_ordering(508) 00:17:07.390 fused_ordering(509) 00:17:07.390 fused_ordering(510) 00:17:07.390 fused_ordering(511) 00:17:07.390 fused_ordering(512) 00:17:07.390 fused_ordering(513) 00:17:07.390 fused_ordering(514) 00:17:07.390 fused_ordering(515) 00:17:07.390 fused_ordering(516) 00:17:07.390 fused_ordering(517) 00:17:07.390 fused_ordering(518) 00:17:07.390 fused_ordering(519) 00:17:07.390 fused_ordering(520) 00:17:07.390 fused_ordering(521) 00:17:07.390 fused_ordering(522) 00:17:07.390 fused_ordering(523) 00:17:07.390 fused_ordering(524) 00:17:07.390 fused_ordering(525) 00:17:07.390 fused_ordering(526) 00:17:07.390 fused_ordering(527) 00:17:07.390 fused_ordering(528) 
00:17:07.390 fused_ordering(529) 00:17:07.390 fused_ordering(530) 00:17:07.390 fused_ordering(531) 00:17:07.390 fused_ordering(532) 00:17:07.390 fused_ordering(533) 00:17:07.390 fused_ordering(534) 00:17:07.390 fused_ordering(535) 00:17:07.390 fused_ordering(536) 00:17:07.390 fused_ordering(537) 00:17:07.390 fused_ordering(538) 00:17:07.390 fused_ordering(539) 00:17:07.390 fused_ordering(540) 00:17:07.390 fused_ordering(541) 00:17:07.390 fused_ordering(542) 00:17:07.390 fused_ordering(543) 00:17:07.390 fused_ordering(544) 00:17:07.390 fused_ordering(545) 00:17:07.390 fused_ordering(546) 00:17:07.390 fused_ordering(547) 00:17:07.390 fused_ordering(548) 00:17:07.390 fused_ordering(549) 00:17:07.390 fused_ordering(550) 00:17:07.390 fused_ordering(551) 00:17:07.390 fused_ordering(552) 00:17:07.390 fused_ordering(553) 00:17:07.390 fused_ordering(554) 00:17:07.390 fused_ordering(555) 00:17:07.390 fused_ordering(556) 00:17:07.390 fused_ordering(557) 00:17:07.390 fused_ordering(558) 00:17:07.390 fused_ordering(559) 00:17:07.390 fused_ordering(560) 00:17:07.390 fused_ordering(561) 00:17:07.390 fused_ordering(562) 00:17:07.390 fused_ordering(563) 00:17:07.390 fused_ordering(564) 00:17:07.390 fused_ordering(565) 00:17:07.390 fused_ordering(566) 00:17:07.390 fused_ordering(567) 00:17:07.390 fused_ordering(568) 00:17:07.390 fused_ordering(569) 00:17:07.390 fused_ordering(570) 00:17:07.390 fused_ordering(571) 00:17:07.390 fused_ordering(572) 00:17:07.390 fused_ordering(573) 00:17:07.390 fused_ordering(574) 00:17:07.390 fused_ordering(575) 00:17:07.390 fused_ordering(576) 00:17:07.390 fused_ordering(577) 00:17:07.390 fused_ordering(578) 00:17:07.390 fused_ordering(579) 00:17:07.390 fused_ordering(580) 00:17:07.390 fused_ordering(581) 00:17:07.390 fused_ordering(582) 00:17:07.390 fused_ordering(583) 00:17:07.390 fused_ordering(584) 00:17:07.390 fused_ordering(585) 00:17:07.390 fused_ordering(586) 00:17:07.390 fused_ordering(587) 00:17:07.390 fused_ordering(588) 00:17:07.390 fused_ordering(589) 00:17:07.390 fused_ordering(590) 00:17:07.390 fused_ordering(591) 00:17:07.390 fused_ordering(592) 00:17:07.390 fused_ordering(593) 00:17:07.390 fused_ordering(594) 00:17:07.390 fused_ordering(595) 00:17:07.390 fused_ordering(596) 00:17:07.390 fused_ordering(597) 00:17:07.390 fused_ordering(598) 00:17:07.390 fused_ordering(599) 00:17:07.390 fused_ordering(600) 00:17:07.390 fused_ordering(601) 00:17:07.390 fused_ordering(602) 00:17:07.390 fused_ordering(603) 00:17:07.390 fused_ordering(604) 00:17:07.390 fused_ordering(605) 00:17:07.390 fused_ordering(606) 00:17:07.390 fused_ordering(607) 00:17:07.390 fused_ordering(608) 00:17:07.390 fused_ordering(609) 00:17:07.390 fused_ordering(610) 00:17:07.390 fused_ordering(611) 00:17:07.390 fused_ordering(612) 00:17:07.390 fused_ordering(613) 00:17:07.390 fused_ordering(614) 00:17:07.390 fused_ordering(615) 00:17:07.961 fused_ordering(616) 00:17:07.961 fused_ordering(617) 00:17:07.961 fused_ordering(618) 00:17:07.961 fused_ordering(619) 00:17:07.961 fused_ordering(620) 00:17:07.961 fused_ordering(621) 00:17:07.961 fused_ordering(622) 00:17:07.961 fused_ordering(623) 00:17:07.961 fused_ordering(624) 00:17:07.961 fused_ordering(625) 00:17:07.961 fused_ordering(626) 00:17:07.961 fused_ordering(627) 00:17:07.961 fused_ordering(628) 00:17:07.961 fused_ordering(629) 00:17:07.961 fused_ordering(630) 00:17:07.961 fused_ordering(631) 00:17:07.961 fused_ordering(632) 00:17:07.961 fused_ordering(633) 00:17:07.961 fused_ordering(634) 00:17:07.961 fused_ordering(635) 00:17:07.961 
fused_ordering(636) 00:17:07.961 fused_ordering(637) 00:17:07.961 fused_ordering(638) 00:17:07.961 fused_ordering(639) 00:17:07.961 fused_ordering(640) 00:17:07.961 fused_ordering(641) 00:17:07.961 fused_ordering(642) 00:17:07.961 fused_ordering(643) 00:17:07.961 fused_ordering(644) 00:17:07.961 fused_ordering(645) 00:17:07.961 fused_ordering(646) 00:17:07.961 fused_ordering(647) 00:17:07.961 fused_ordering(648) 00:17:07.961 fused_ordering(649) 00:17:07.961 fused_ordering(650) 00:17:07.961 fused_ordering(651) 00:17:07.961 fused_ordering(652) 00:17:07.961 fused_ordering(653) 00:17:07.961 fused_ordering(654) 00:17:07.961 fused_ordering(655) 00:17:07.961 fused_ordering(656) 00:17:07.961 fused_ordering(657) 00:17:07.961 fused_ordering(658) 00:17:07.961 fused_ordering(659) 00:17:07.961 fused_ordering(660) 00:17:07.961 fused_ordering(661) 00:17:07.961 fused_ordering(662) 00:17:07.961 fused_ordering(663) 00:17:07.961 fused_ordering(664) 00:17:07.961 fused_ordering(665) 00:17:07.961 fused_ordering(666) 00:17:07.961 fused_ordering(667) 00:17:07.961 fused_ordering(668) 00:17:07.961 fused_ordering(669) 00:17:07.961 fused_ordering(670) 00:17:07.961 fused_ordering(671) 00:17:07.961 fused_ordering(672) 00:17:07.961 fused_ordering(673) 00:17:07.961 fused_ordering(674) 00:17:07.961 fused_ordering(675) 00:17:07.961 fused_ordering(676) 00:17:07.961 fused_ordering(677) 00:17:07.961 fused_ordering(678) 00:17:07.961 fused_ordering(679) 00:17:07.961 fused_ordering(680) 00:17:07.961 fused_ordering(681) 00:17:07.961 fused_ordering(682) 00:17:07.961 fused_ordering(683) 00:17:07.961 fused_ordering(684) 00:17:07.961 fused_ordering(685) 00:17:07.961 fused_ordering(686) 00:17:07.961 fused_ordering(687) 00:17:07.961 fused_ordering(688) 00:17:07.961 fused_ordering(689) 00:17:07.961 fused_ordering(690) 00:17:07.961 fused_ordering(691) 00:17:07.961 fused_ordering(692) 00:17:07.961 fused_ordering(693) 00:17:07.961 fused_ordering(694) 00:17:07.961 fused_ordering(695) 00:17:07.961 fused_ordering(696) 00:17:07.961 fused_ordering(697) 00:17:07.961 fused_ordering(698) 00:17:07.961 fused_ordering(699) 00:17:07.961 fused_ordering(700) 00:17:07.961 fused_ordering(701) 00:17:07.961 fused_ordering(702) 00:17:07.961 fused_ordering(703) 00:17:07.961 fused_ordering(704) 00:17:07.961 fused_ordering(705) 00:17:07.961 fused_ordering(706) 00:17:07.961 fused_ordering(707) 00:17:07.961 fused_ordering(708) 00:17:07.961 fused_ordering(709) 00:17:07.961 fused_ordering(710) 00:17:07.961 fused_ordering(711) 00:17:07.961 fused_ordering(712) 00:17:07.961 fused_ordering(713) 00:17:07.961 fused_ordering(714) 00:17:07.961 fused_ordering(715) 00:17:07.961 fused_ordering(716) 00:17:07.961 fused_ordering(717) 00:17:07.961 fused_ordering(718) 00:17:07.961 fused_ordering(719) 00:17:07.961 fused_ordering(720) 00:17:07.961 fused_ordering(721) 00:17:07.961 fused_ordering(722) 00:17:07.961 fused_ordering(723) 00:17:07.961 fused_ordering(724) 00:17:07.961 fused_ordering(725) 00:17:07.961 fused_ordering(726) 00:17:07.961 fused_ordering(727) 00:17:07.961 fused_ordering(728) 00:17:07.961 fused_ordering(729) 00:17:07.961 fused_ordering(730) 00:17:07.961 fused_ordering(731) 00:17:07.961 fused_ordering(732) 00:17:07.961 fused_ordering(733) 00:17:07.961 fused_ordering(734) 00:17:07.961 fused_ordering(735) 00:17:07.961 fused_ordering(736) 00:17:07.961 fused_ordering(737) 00:17:07.961 fused_ordering(738) 00:17:07.961 fused_ordering(739) 00:17:07.961 fused_ordering(740) 00:17:07.961 fused_ordering(741) 00:17:07.961 fused_ordering(742) 00:17:07.961 fused_ordering(743) 
00:17:07.961 fused_ordering(744) 00:17:07.961 fused_ordering(745) 00:17:07.961 fused_ordering(746) 00:17:07.961 fused_ordering(747) 00:17:07.961 fused_ordering(748) 00:17:07.961 fused_ordering(749) 00:17:07.961 fused_ordering(750) 00:17:07.961 fused_ordering(751) 00:17:07.961 fused_ordering(752) 00:17:07.961 fused_ordering(753) 00:17:07.961 fused_ordering(754) 00:17:07.961 fused_ordering(755) 00:17:07.961 fused_ordering(756) 00:17:07.961 fused_ordering(757) 00:17:07.961 fused_ordering(758) 00:17:07.961 fused_ordering(759) 00:17:07.961 fused_ordering(760) 00:17:07.961 fused_ordering(761) 00:17:07.961 fused_ordering(762) 00:17:07.961 fused_ordering(763) 00:17:07.961 fused_ordering(764) 00:17:07.961 fused_ordering(765) 00:17:07.961 fused_ordering(766) 00:17:07.961 fused_ordering(767) 00:17:07.961 fused_ordering(768) 00:17:07.961 fused_ordering(769) 00:17:07.961 fused_ordering(770) 00:17:07.961 fused_ordering(771) 00:17:07.961 fused_ordering(772) 00:17:07.961 fused_ordering(773) 00:17:07.961 fused_ordering(774) 00:17:07.961 fused_ordering(775) 00:17:07.961 fused_ordering(776) 00:17:07.961 fused_ordering(777) 00:17:07.961 fused_ordering(778) 00:17:07.961 fused_ordering(779) 00:17:07.961 fused_ordering(780) 00:17:07.961 fused_ordering(781) 00:17:07.961 fused_ordering(782) 00:17:07.961 fused_ordering(783) 00:17:07.961 fused_ordering(784) 00:17:07.961 fused_ordering(785) 00:17:07.961 fused_ordering(786) 00:17:07.961 fused_ordering(787) 00:17:07.961 fused_ordering(788) 00:17:07.962 fused_ordering(789) 00:17:07.962 fused_ordering(790) 00:17:07.962 fused_ordering(791) 00:17:07.962 fused_ordering(792) 00:17:07.962 fused_ordering(793) 00:17:07.962 fused_ordering(794) 00:17:07.962 fused_ordering(795) 00:17:07.962 fused_ordering(796) 00:17:07.962 fused_ordering(797) 00:17:07.962 fused_ordering(798) 00:17:07.962 fused_ordering(799) 00:17:07.962 fused_ordering(800) 00:17:07.962 fused_ordering(801) 00:17:07.962 fused_ordering(802) 00:17:07.962 fused_ordering(803) 00:17:07.962 fused_ordering(804) 00:17:07.962 fused_ordering(805) 00:17:07.962 fused_ordering(806) 00:17:07.962 fused_ordering(807) 00:17:07.962 fused_ordering(808) 00:17:07.962 fused_ordering(809) 00:17:07.962 fused_ordering(810) 00:17:07.962 fused_ordering(811) 00:17:07.962 fused_ordering(812) 00:17:07.962 fused_ordering(813) 00:17:07.962 fused_ordering(814) 00:17:07.962 fused_ordering(815) 00:17:07.962 fused_ordering(816) 00:17:07.962 fused_ordering(817) 00:17:07.962 fused_ordering(818) 00:17:07.962 fused_ordering(819) 00:17:07.962 fused_ordering(820) 00:17:08.531 fused_ordering(821) 00:17:08.531 fused_ordering(822) 00:17:08.531 fused_ordering(823) 00:17:08.531 fused_ordering(824) 00:17:08.531 fused_ordering(825) 00:17:08.531 fused_ordering(826) 00:17:08.531 fused_ordering(827) 00:17:08.531 fused_ordering(828) 00:17:08.531 fused_ordering(829) 00:17:08.531 fused_ordering(830) 00:17:08.531 fused_ordering(831) 00:17:08.531 fused_ordering(832) 00:17:08.531 fused_ordering(833) 00:17:08.531 fused_ordering(834) 00:17:08.531 fused_ordering(835) 00:17:08.531 fused_ordering(836) 00:17:08.531 fused_ordering(837) 00:17:08.531 fused_ordering(838) 00:17:08.531 fused_ordering(839) 00:17:08.531 fused_ordering(840) 00:17:08.531 fused_ordering(841) 00:17:08.531 fused_ordering(842) 00:17:08.531 fused_ordering(843) 00:17:08.531 fused_ordering(844) 00:17:08.531 fused_ordering(845) 00:17:08.531 fused_ordering(846) 00:17:08.531 fused_ordering(847) 00:17:08.531 fused_ordering(848) 00:17:08.531 fused_ordering(849) 00:17:08.531 fused_ordering(850) 00:17:08.531 
fused_ordering(851) 00:17:08.531 fused_ordering(852) 00:17:08.531 fused_ordering(853) 00:17:08.531 fused_ordering(854) 00:17:08.531 fused_ordering(855) 00:17:08.531 fused_ordering(856) 00:17:08.531 fused_ordering(857) 00:17:08.531 fused_ordering(858) 00:17:08.531 fused_ordering(859) 00:17:08.531 fused_ordering(860) 00:17:08.531 fused_ordering(861) 00:17:08.531 fused_ordering(862) 00:17:08.531 fused_ordering(863) 00:17:08.531 fused_ordering(864) 00:17:08.531 fused_ordering(865) 00:17:08.531 fused_ordering(866) 00:17:08.531 fused_ordering(867) 00:17:08.531 fused_ordering(868) 00:17:08.531 fused_ordering(869) 00:17:08.531 fused_ordering(870) 00:17:08.532 fused_ordering(871) 00:17:08.532 fused_ordering(872) 00:17:08.532 fused_ordering(873) 00:17:08.532 fused_ordering(874) 00:17:08.532 fused_ordering(875) 00:17:08.532 fused_ordering(876) 00:17:08.532 fused_ordering(877) 00:17:08.532 fused_ordering(878) 00:17:08.532 fused_ordering(879) 00:17:08.532 fused_ordering(880) 00:17:08.532 fused_ordering(881) 00:17:08.532 fused_ordering(882) 00:17:08.532 fused_ordering(883) 00:17:08.532 fused_ordering(884) 00:17:08.532 fused_ordering(885) 00:17:08.532 fused_ordering(886) 00:17:08.532 fused_ordering(887) 00:17:08.532 fused_ordering(888) 00:17:08.532 fused_ordering(889) 00:17:08.532 fused_ordering(890) 00:17:08.532 fused_ordering(891) 00:17:08.532 fused_ordering(892) 00:17:08.532 fused_ordering(893) 00:17:08.532 fused_ordering(894) 00:17:08.532 fused_ordering(895) 00:17:08.532 fused_ordering(896) 00:17:08.532 fused_ordering(897) 00:17:08.532 fused_ordering(898) 00:17:08.532 fused_ordering(899) 00:17:08.532 fused_ordering(900) 00:17:08.532 fused_ordering(901) 00:17:08.532 fused_ordering(902) 00:17:08.532 fused_ordering(903) 00:17:08.532 fused_ordering(904) 00:17:08.532 fused_ordering(905) 00:17:08.532 fused_ordering(906) 00:17:08.532 fused_ordering(907) 00:17:08.532 fused_ordering(908) 00:17:08.532 fused_ordering(909) 00:17:08.532 fused_ordering(910) 00:17:08.532 fused_ordering(911) 00:17:08.532 fused_ordering(912) 00:17:08.532 fused_ordering(913) 00:17:08.532 fused_ordering(914) 00:17:08.532 fused_ordering(915) 00:17:08.532 fused_ordering(916) 00:17:08.532 fused_ordering(917) 00:17:08.532 fused_ordering(918) 00:17:08.532 fused_ordering(919) 00:17:08.532 fused_ordering(920) 00:17:08.532 fused_ordering(921) 00:17:08.532 fused_ordering(922) 00:17:08.532 fused_ordering(923) 00:17:08.532 fused_ordering(924) 00:17:08.532 fused_ordering(925) 00:17:08.532 fused_ordering(926) 00:17:08.532 fused_ordering(927) 00:17:08.532 fused_ordering(928) 00:17:08.532 fused_ordering(929) 00:17:08.532 fused_ordering(930) 00:17:08.532 fused_ordering(931) 00:17:08.532 fused_ordering(932) 00:17:08.532 fused_ordering(933) 00:17:08.532 fused_ordering(934) 00:17:08.532 fused_ordering(935) 00:17:08.532 fused_ordering(936) 00:17:08.532 fused_ordering(937) 00:17:08.532 fused_ordering(938) 00:17:08.532 fused_ordering(939) 00:17:08.532 fused_ordering(940) 00:17:08.532 fused_ordering(941) 00:17:08.532 fused_ordering(942) 00:17:08.532 fused_ordering(943) 00:17:08.532 fused_ordering(944) 00:17:08.532 fused_ordering(945) 00:17:08.532 fused_ordering(946) 00:17:08.532 fused_ordering(947) 00:17:08.532 fused_ordering(948) 00:17:08.532 fused_ordering(949) 00:17:08.532 fused_ordering(950) 00:17:08.532 fused_ordering(951) 00:17:08.532 fused_ordering(952) 00:17:08.532 fused_ordering(953) 00:17:08.532 fused_ordering(954) 00:17:08.532 fused_ordering(955) 00:17:08.532 fused_ordering(956) 00:17:08.532 fused_ordering(957) 00:17:08.532 fused_ordering(958) 
00:17:08.532 fused_ordering(959) 00:17:08.532 fused_ordering(960) 00:17:08.532 fused_ordering(961) 00:17:08.532 fused_ordering(962) 00:17:08.532 fused_ordering(963) 00:17:08.532 fused_ordering(964) 00:17:08.532 fused_ordering(965) 00:17:08.532 fused_ordering(966) 00:17:08.532 fused_ordering(967) 00:17:08.532 fused_ordering(968) 00:17:08.532 fused_ordering(969) 00:17:08.532 fused_ordering(970) 00:17:08.532 fused_ordering(971) 00:17:08.532 fused_ordering(972) 00:17:08.532 fused_ordering(973) 00:17:08.532 fused_ordering(974) 00:17:08.532 fused_ordering(975) 00:17:08.532 fused_ordering(976) 00:17:08.532 fused_ordering(977) 00:17:08.532 fused_ordering(978) 00:17:08.532 fused_ordering(979) 00:17:08.532 fused_ordering(980) 00:17:08.532 fused_ordering(981) 00:17:08.532 fused_ordering(982) 00:17:08.532 fused_ordering(983) 00:17:08.532 fused_ordering(984) 00:17:08.532 fused_ordering(985) 00:17:08.532 fused_ordering(986) 00:17:08.532 fused_ordering(987) 00:17:08.532 fused_ordering(988) 00:17:08.532 fused_ordering(989) 00:17:08.532 fused_ordering(990) 00:17:08.532 fused_ordering(991) 00:17:08.532 fused_ordering(992) 00:17:08.532 fused_ordering(993) 00:17:08.532 fused_ordering(994) 00:17:08.532 fused_ordering(995) 00:17:08.532 fused_ordering(996) 00:17:08.532 fused_ordering(997) 00:17:08.532 fused_ordering(998) 00:17:08.532 fused_ordering(999) 00:17:08.532 fused_ordering(1000) 00:17:08.532 fused_ordering(1001) 00:17:08.532 fused_ordering(1002) 00:17:08.532 fused_ordering(1003) 00:17:08.532 fused_ordering(1004) 00:17:08.532 fused_ordering(1005) 00:17:08.532 fused_ordering(1006) 00:17:08.532 fused_ordering(1007) 00:17:08.532 fused_ordering(1008) 00:17:08.532 fused_ordering(1009) 00:17:08.532 fused_ordering(1010) 00:17:08.532 fused_ordering(1011) 00:17:08.532 fused_ordering(1012) 00:17:08.532 fused_ordering(1013) 00:17:08.532 fused_ordering(1014) 00:17:08.532 fused_ordering(1015) 00:17:08.532 fused_ordering(1016) 00:17:08.532 fused_ordering(1017) 00:17:08.532 fused_ordering(1018) 00:17:08.532 fused_ordering(1019) 00:17:08.532 fused_ordering(1020) 00:17:08.532 fused_ordering(1021) 00:17:08.532 fused_ordering(1022) 00:17:08.532 fused_ordering(1023) 00:17:08.532 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:08.532 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:08.532 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:08.532 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:08.532 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:08.532 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:08.532 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:08.532 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:08.532 rmmod nvme_tcp 00:17:08.532 rmmod nvme_fabrics 00:17:08.532 rmmod nvme_keyring 00:17:08.532 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:08.532 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:08.532 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:08.532 07:02:29 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 220401 ']' 00:17:08.532 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 220401 00:17:08.532 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 220401 ']' 00:17:08.532 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 220401 00:17:08.532 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:08.532 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:08.532 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 220401 00:17:08.532 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:08.532 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:08.532 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 220401' 00:17:08.532 killing process with pid 220401 00:17:08.532 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 220401 00:17:08.532 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 220401 00:17:08.793 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:08.793 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:08.793 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:08.793 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:08.793 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:08.793 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:08.793 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:08.793 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:08.793 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:08.793 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.793 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.793 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.700 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:10.700 00:17:10.700 real 0m7.433s 00:17:10.700 user 0m5.172s 00:17:10.700 sys 0m2.821s 00:17:10.700 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.700 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:10.700 ************************************ 00:17:10.700 END TEST nvmf_fused_ordering 00:17:10.700 
************************************ 00:17:10.959 07:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:10.959 07:02:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:10.959 07:02:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:10.959 07:02:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:10.959 ************************************ 00:17:10.959 START TEST nvmf_ns_masking 00:17:10.959 ************************************ 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:10.960 * Looking for test storage... 00:17:10.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:10.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.960 --rc genhtml_branch_coverage=1 00:17:10.960 --rc genhtml_function_coverage=1 00:17:10.960 --rc genhtml_legend=1 00:17:10.960 --rc geninfo_all_blocks=1 00:17:10.960 --rc geninfo_unexecuted_blocks=1 00:17:10.960 00:17:10.960 ' 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:10.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.960 --rc genhtml_branch_coverage=1 00:17:10.960 --rc genhtml_function_coverage=1 00:17:10.960 --rc genhtml_legend=1 00:17:10.960 --rc geninfo_all_blocks=1 00:17:10.960 --rc geninfo_unexecuted_blocks=1 00:17:10.960 00:17:10.960 ' 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:10.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.960 --rc genhtml_branch_coverage=1 00:17:10.960 --rc genhtml_function_coverage=1 00:17:10.960 --rc genhtml_legend=1 00:17:10.960 --rc geninfo_all_blocks=1 00:17:10.960 --rc geninfo_unexecuted_blocks=1 00:17:10.960 00:17:10.960 ' 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:10.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.960 --rc genhtml_branch_coverage=1 00:17:10.960 --rc genhtml_function_coverage=1 00:17:10.960 --rc genhtml_legend=1 00:17:10.960 --rc geninfo_all_blocks=1 00:17:10.960 --rc geninfo_unexecuted_blocks=1 00:17:10.960 00:17:10.960 ' 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.960 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:10.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=7c6736da-d96e-49ff-ab06-7be64350e523 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=9021e573-d7ba-454d-bc33-c525c592c23c 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=61451c06-b072-410d-a279-33d776df9ec4 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:10.961 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:13.504 07:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:13.504 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:13.504 07:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:13.504 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:13.504 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
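The per-port discovery captured in this stretch of the trace amounts to mapping each supported NIC's PCI address to its kernel net device through sysfs. A minimal shell sketch of that mapping, assuming the two E810 ports (0000:0a:00.0 and 0000:0a:00.1) reported here and the same sysfs glob the helper itself uses:

  for pci in 0000:0a:00.0 0000:0a:00.1; do
    # Each PCI function lists its bound net device(s) under /sys/bus/pci/devices/<pci>/net/.
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
    done
  done
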
00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:13.504 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:13.504 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:13.505 07:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:13.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:13.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:17:13.505 00:17:13.505 --- 10.0.0.2 ping statistics --- 00:17:13.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.505 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:13.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:13.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:17:13.505 00:17:13.505 --- 10.0.0.1 ping statistics --- 00:17:13.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.505 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=222750 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 222750 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 222750 ']' 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.505 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:13.505 [2024-11-18 07:02:34.260163] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:17:13.505 [2024-11-18 07:02:34.260253] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.505 [2024-11-18 07:02:34.330311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.505 [2024-11-18 07:02:34.373987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:13.505 [2024-11-18 07:02:34.374055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:13.505 [2024-11-18 07:02:34.374078] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:13.505 [2024-11-18 07:02:34.374090] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:13.505 [2024-11-18 07:02:34.374099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
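The nvmfappstart/waitforlisten step recorded above reduces to starting nvmf_tgt inside the target network namespace and then waiting until its RPC socket answers. A rough sketch under the paths shown in this trace; the polling loop and the rpc_get_methods probe are illustrative stand-ins, since the real waitforlisten helper body is not reproduced in the log:

  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Launch the target with the same shared-memory id and tracepoint mask as above.
  ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
  nvmfpid=$!
  # Wait until the app listens on /var/tmp/spdk.sock and serves RPCs.
  until "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is up"

From this point on, the trace drives the target entirely through rpc.py calls (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, and the namespace add/visibility RPCs that follow).
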
00:17:13.505 [2024-11-18 07:02:34.374803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.764 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:13.764 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:13.764 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:13.764 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:13.764 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:13.764 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.764 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:14.023 [2024-11-18 07:02:34.757284] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.023 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:14.023 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:14.023 07:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:14.281 Malloc1 00:17:14.281 07:02:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:14.539 Malloc2 00:17:14.539 07:02:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:14.798 07:02:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:15.057 07:02:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:15.316 [2024-11-18 07:02:36.208283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.316 07:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:15.316 07:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 61451c06-b072-410d-a279-33d776df9ec4 -a 10.0.0.2 -s 4420 -i 4 00:17:15.576 07:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:15.576 07:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:15.576 07:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:15.576 07:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:15.576 
07:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:17.485 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:17.485 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:17.485 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:17.485 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:17.485 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:17.485 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:17.485 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:17.485 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:17.485 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:17.485 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:17.485 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:17.485 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:17.485 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:17.485 [ 0]:0x1 00:17:17.485 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:17.485 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:17.485 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8c9c239d533b42e4b9bb8271378d6e38 00:17:17.485 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8c9c239d533b42e4b9bb8271378d6e38 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:17.485 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:18.052 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:18.052 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:18.052 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:18.052 [ 0]:0x1 00:17:18.052 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:18.052 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:18.052 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8c9c239d533b42e4b9bb8271378d6e38 00:17:18.052 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8c9c239d533b42e4b9bb8271378d6e38 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:18.052 07:02:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:18.052 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:18.052 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:18.052 [ 1]:0x2 00:17:18.052 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:18.052 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:18.052 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=581c6fb170c84c0c8d83186c6f1e8acb 00:17:18.052 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 581c6fb170c84c0c8d83186c6f1e8acb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:18.052 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:18.052 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:18.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.052 07:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:18.310 07:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:18.568 07:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:18.568 07:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 61451c06-b072-410d-a279-33d776df9ec4 -a 10.0.0.2 -s 4420 -i 4 00:17:18.826 07:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:18.826 07:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:18.826 07:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:18.826 07:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:18.826 07:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:18.826 07:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:20.735 [ 0]:0x2 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=581c6fb170c84c0c8d83186c6f1e8acb 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 581c6fb170c84c0c8d83186c6f1e8acb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:20.735 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:21.304 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:21.304 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:21.304 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:21.304 [ 0]:0x1 00:17:21.304 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:21.304 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:21.304 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8c9c239d533b42e4b9bb8271378d6e38 00:17:21.304 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8c9c239d533b42e4b9bb8271378d6e38 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:21.304 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:21.304 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:21.304 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:21.304 [ 1]:0x2 00:17:21.304 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:21.304 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:21.304 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=581c6fb170c84c0c8d83186c6f1e8acb 00:17:21.304 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 581c6fb170c84c0c8d83186c6f1e8acb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:21.304 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:21.563 07:02:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:21.563 [ 0]:0x2 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=581c6fb170c84c0c8d83186c6f1e8acb 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 581c6fb170c84c0c8d83186c6f1e8acb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:21.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:21.563 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:21.823 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:21.823 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 61451c06-b072-410d-a279-33d776df9ec4 -a 10.0.0.2 -s 4420 -i 4 00:17:22.084 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:22.084 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:22.084 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:22.084 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:22.084 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:22.084 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:24.624 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:24.624 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:24.624 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:24.624 [ 0]:0x1 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8c9c239d533b42e4b9bb8271378d6e38 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8c9c239d533b42e4b9bb8271378d6e38 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:24.624 [ 1]:0x2 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=581c6fb170c84c0c8d83186c6f1e8acb 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 581c6fb170c84c0c8d83186c6f1e8acb != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:24.624 [ 0]:0x2 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=581c6fb170c84c0c8d83186c6f1e8acb 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 581c6fb170c84c0c8d83186c6f1e8acb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:24.624 07:02:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:24.624 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.625 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:24.625 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.625 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:24.625 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.625 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:24.625 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:24.625 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:24.883 [2024-11-18 07:02:45.737235] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:24.883 request: 00:17:24.883 { 00:17:24.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:24.883 "nsid": 2, 00:17:24.883 "host": "nqn.2016-06.io.spdk:host1", 00:17:24.883 "method": "nvmf_ns_remove_host", 00:17:24.883 "req_id": 1 00:17:24.883 } 00:17:24.883 Got JSON-RPC error response 00:17:24.883 response: 00:17:24.883 { 00:17:24.883 "code": -32602, 00:17:24.883 "message": "Invalid parameters" 00:17:24.883 } 00:17:24.883 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:24.883 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.883 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.883 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.883 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:24.883 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:24.883 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:24.883 07:02:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:24.883 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.883 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:24.883 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.883 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:24.883 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:24.883 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:24.883 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:24.883 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:24.883 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:24.883 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:24.884 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:24.884 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.884 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.884 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.884 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:24.884 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:24.884 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:24.884 [ 0]:0x2 00:17:24.884 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:24.884 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:25.142 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=581c6fb170c84c0c8d83186c6f1e8acb 00:17:25.142 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 581c6fb170c84c0c8d83186c6f1e8acb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:25.142 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:25.142 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:25.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:25.142 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:25.142 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=224239 00:17:25.142 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.142 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 224239 /var/tmp/host.sock 00:17:25.142 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 224239 ']' 00:17:25.142 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:25.142 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:25.142 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:25.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:25.143 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:25.143 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:25.143 [2024-11-18 07:02:46.077221] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:17:25.143 [2024-11-18 07:02:46.077293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224239 ] 00:17:25.402 [2024-11-18 07:02:46.143338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.403 [2024-11-18 07:02:46.189961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.661 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.661 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:25.661 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:25.920 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:26.178 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 7c6736da-d96e-49ff-ab06-7be64350e523 00:17:26.178 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:26.178 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7C6736DAD96E49FFAB067BE64350E523 -i 00:17:26.437 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 9021e573-d7ba-454d-bc33-c525c592c23c 00:17:26.437 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:26.437 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 9021E573D7BA454DBC33C525C592C23C -i 00:17:26.695 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:27.262 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:27.262 07:02:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:27.262 07:02:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:27.831 nvme0n1 00:17:27.831 07:02:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:27.831 07:02:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:28.089 nvme1n2 00:17:28.089 07:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:28.089 07:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:28.089 07:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:28.089 07:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:28.090 07:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:28.356 07:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:28.356 07:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:28.356 07:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:28.356 07:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:28.618 07:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 7c6736da-d96e-49ff-ab06-7be64350e523 == \7\c\6\7\3\6\d\a\-\d\9\6\e\-\4\9\f\f\-\a\b\0\6\-\7\b\e\6\4\3\5\0\e\5\2\3 ]] 00:17:28.618 07:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:28.618 07:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:28.618 07:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:29.186 07:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
9021e573-d7ba-454d-bc33-c525c592c23c == \9\0\2\1\e\5\7\3\-\d\7\b\a\-\4\5\4\d\-\b\c\3\3\-\c\5\2\5\c\5\9\2\c\2\3\c ]] 00:17:29.186 07:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:29.445 07:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:29.704 07:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 7c6736da-d96e-49ff-ab06-7be64350e523 00:17:29.704 07:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:29.704 07:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7C6736DAD96E49FFAB067BE64350E523 00:17:29.704 07:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:29.704 07:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7C6736DAD96E49FFAB067BE64350E523 00:17:29.704 07:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:29.704 07:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.704 07:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:29.704 07:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.704 07:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:29.704 07:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.704 07:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:29.704 07:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:29.704 07:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7C6736DAD96E49FFAB067BE64350E523 00:17:29.962 [2024-11-18 07:02:50.691227] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:29.962 [2024-11-18 07:02:50.691269] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:29.962 [2024-11-18 07:02:50.691284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.962 request: 00:17:29.962 { 00:17:29.962 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.962 "namespace": { 00:17:29.962 "bdev_name": 
"invalid", 00:17:29.962 "nsid": 1, 00:17:29.962 "nguid": "7C6736DAD96E49FFAB067BE64350E523", 00:17:29.962 "no_auto_visible": false 00:17:29.962 }, 00:17:29.962 "method": "nvmf_subsystem_add_ns", 00:17:29.962 "req_id": 1 00:17:29.962 } 00:17:29.962 Got JSON-RPC error response 00:17:29.962 response: 00:17:29.962 { 00:17:29.962 "code": -32602, 00:17:29.962 "message": "Invalid parameters" 00:17:29.962 } 00:17:29.962 07:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:29.962 07:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:29.962 07:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:29.962 07:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:29.962 07:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 7c6736da-d96e-49ff-ab06-7be64350e523 00:17:29.962 07:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:29.962 07:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7C6736DAD96E49FFAB067BE64350E523 -i 00:17:30.220 07:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:17:32.126 07:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:17:32.126 07:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:17:32.126 07:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:32.383 07:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:17:32.383 07:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 224239 00:17:32.383 07:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 224239 ']' 00:17:32.383 07:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 224239 00:17:32.383 07:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:32.383 07:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:32.383 07:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 224239 00:17:32.383 07:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:32.383 07:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:32.383 07:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 224239' 00:17:32.383 killing process with pid 224239 00:17:32.383 07:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 224239 00:17:32.383 07:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 224239 00:17:32.952 07:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:33.210 07:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:33.210 07:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:17:33.210 07:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:33.210 07:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:33.210 07:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:33.210 07:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:33.210 07:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:33.210 07:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:33.210 rmmod nvme_tcp 00:17:33.210 rmmod nvme_fabrics 00:17:33.210 rmmod nvme_keyring 00:17:33.211 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:33.211 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:33.211 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:33.211 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 222750 ']' 00:17:33.211 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 222750 00:17:33.211 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 222750 ']' 00:17:33.211 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 222750 00:17:33.211 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:33.211 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.211 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 222750 00:17:33.211 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:33.211 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:33.211 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 222750' 00:17:33.211 killing process with pid 222750 00:17:33.211 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 222750 00:17:33.211 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 222750 00:17:33.469 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:33.469 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:33.469 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:33.469 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:33.469 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:17:33.469 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:33.469 
07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:17:33.469 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:33.469 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:33.469 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.470 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:33.470 07:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.379 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:35.379 00:17:35.379 real 0m24.620s 00:17:35.379 user 0m35.881s 00:17:35.379 sys 0m4.629s 00:17:35.379 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.379 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:35.379 ************************************ 00:17:35.379 END TEST nvmf_ns_masking 00:17:35.379 ************************************ 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:35.638 ************************************ 00:17:35.638 START TEST nvmf_nvme_cli 00:17:35.638 ************************************ 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:35.638 * Looking for test storage... 
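For reference, the masking flow that the ns_masking run above just exercised condenses into a short host-side sketch. The RPC verbs, the nvme-cli invocations, the NQNs and the all-zero-NGUID check are taken directly from the trace; the ns_is_visible helper written here is illustrative rather than the actual test code, and the controller may enumerate as something other than /dev/nvme0 on another host.

#!/usr/bin/env bash
# Minimal sketch of the namespace-masking check traced above (illustrative, not the test script).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2016-06.io.spdk:cnode1
hostnqn=nqn.2016-06.io.spdk:host1

ns_is_visible() {   # $1 = nsid, e.g. 0x1; succeeds if the namespace reports a non-zero NGUID
  nvme list-ns /dev/nvme0 | grep "$1"
  local nguid
  nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
  [[ $nguid != 00000000000000000000000000000000 ]]
}

$rpc nvmf_ns_add_host    "$subnqn" 1 "$hostnqn"    # unmask nsid 1 for this host NQN
ns_is_visible 0x1 && echo "nsid 1 visible"
$rpc nvmf_ns_remove_host "$subnqn" 1 "$hostnqn"    # mask it again
ns_is_visible 0x1 || echo "nsid 1 masked"

This mirrors the ns_masking.sh@88 through @95 steps in the trace, where the same visibility probe is re-run after each add/remove-host RPC.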
00:17:35.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:35.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.638 --rc genhtml_branch_coverage=1 00:17:35.638 --rc genhtml_function_coverage=1 00:17:35.638 --rc genhtml_legend=1 00:17:35.638 --rc geninfo_all_blocks=1 00:17:35.638 --rc geninfo_unexecuted_blocks=1 00:17:35.638 00:17:35.638 ' 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:35.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.638 --rc genhtml_branch_coverage=1 00:17:35.638 --rc genhtml_function_coverage=1 00:17:35.638 --rc genhtml_legend=1 00:17:35.638 --rc geninfo_all_blocks=1 00:17:35.638 --rc geninfo_unexecuted_blocks=1 00:17:35.638 00:17:35.638 ' 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:35.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.638 --rc genhtml_branch_coverage=1 00:17:35.638 --rc genhtml_function_coverage=1 00:17:35.638 --rc genhtml_legend=1 00:17:35.638 --rc geninfo_all_blocks=1 00:17:35.638 --rc geninfo_unexecuted_blocks=1 00:17:35.638 00:17:35.638 ' 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:35.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.638 --rc genhtml_branch_coverage=1 00:17:35.638 --rc genhtml_function_coverage=1 00:17:35.638 --rc genhtml_legend=1 00:17:35.638 --rc geninfo_all_blocks=1 00:17:35.638 --rc geninfo_unexecuted_blocks=1 00:17:35.638 00:17:35.638 ' 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:35.638 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
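The scripts/common.sh trace above steps through the dotted-version comparison (lt 1.15 2 splits both versions on the separators, normalizes each field with decimal, then compares field by field before picking the lcov option set). A rough standalone equivalent, with a hypothetical function name rather than the SPDK helpers themselves, would look like:

# Illustrative field-by-field version compare in the spirit of the cmp_versions trace above.
version_lt() {                      # usage: version_lt 1.15 2  -> true if $1 < $2
  local -a a b
  IFS=.-: read -ra a <<< "$1"
  IFS=.-: read -ra b <<< "$2"
  local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < max; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1                          # equal is not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2: keep the --rc lcov_* coverage options"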
00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:35.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:35.639 07:02:56 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:35.639 07:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:38.171 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:38.171 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:38.172 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.172 
07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:38.172 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:38.172 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:38.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:38.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:17:38.172 00:17:38.172 --- 10.0.0.2 ping statistics --- 00:17:38.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.172 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:38.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:38.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:17:38.172 00:17:38.172 --- 10.0.0.1 ping statistics --- 00:17:38.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.172 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=227160 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 227160 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 227160 ']' 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.172 07:02:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:38.172 [2024-11-18 07:02:58.836614] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
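The nvmf_tcp_init sequence traced above can be condensed into the following shell sketch (interface names and addresses copied from this run; an illustration of the test topology, not a verbatim excerpt of nvmf/common.sh):

    # Move one port of the NIC pair into a private network namespace so the
    # target and the initiator exchange real NVMe/TCP traffic on one host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side (inside the namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP on the fabric port
    ping -c 1 10.0.0.2                                                  # initiator -> target reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability check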
00:17:38.172 [2024-11-18 07:02:58.836699] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.172 [2024-11-18 07:02:58.906045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:38.172 [2024-11-18 07:02:58.952062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.172 [2024-11-18 07:02:58.952115] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.172 [2024-11-18 07:02:58.952138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.172 [2024-11-18 07:02:58.952149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.172 [2024-11-18 07:02:58.952158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.172 [2024-11-18 07:02:58.953825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.172 [2024-11-18 07:02:58.953906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.172 [2024-11-18 07:02:58.953971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:38.172 [2024-11-18 07:02:58.953974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.173 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.173 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:17:38.173 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:38.173 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:38.173 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:38.173 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.173 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:38.173 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.173 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:38.173 [2024-11-18 07:02:59.147638] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:38.431 Malloc0 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
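Once the target started above is listening on /var/tmp/spdk.sock, the test provisions it over RPC; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py. Written out directly (paths shortened), the storage side amounts to the calls below, whose individual invocations appear in the stretch of trace that follows:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # transport options exactly as traced
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB RAM-backed bdev, 512-byte blocks
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420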
00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:38.431 Malloc1 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:38.431 [2024-11-18 07:02:59.245162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.431 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:17:38.688 00:17:38.688 Discovery Log Number of Records 2, Generation counter 2 00:17:38.688 =====Discovery Log Entry 0====== 00:17:38.688 trtype: tcp 00:17:38.688 adrfam: ipv4 00:17:38.688 subtype: current discovery subsystem 00:17:38.688 treq: not required 00:17:38.688 portid: 0 00:17:38.688 trsvcid: 4420 00:17:38.688 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:17:38.688 traddr: 10.0.0.2 00:17:38.688 eflags: explicit discovery connections, duplicate discovery information 00:17:38.688 sectype: none 00:17:38.688 =====Discovery Log Entry 1====== 00:17:38.688 trtype: tcp 00:17:38.688 adrfam: ipv4 00:17:38.688 subtype: nvme subsystem 00:17:38.688 treq: not required 00:17:38.688 portid: 0 00:17:38.688 trsvcid: 4420 00:17:38.688 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:38.688 traddr: 10.0.0.2 00:17:38.688 eflags: none 00:17:38.688 sectype: none 00:17:38.688 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:38.688 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:38.688 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:38.688 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:38.688 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:38.688 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:38.688 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:38.688 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:38.688 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:38.688 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:38.688 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:39.253 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:39.253 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:17:39.253 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:39.253 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:39.253 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:39.253 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:17:41.266 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:41.266 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:41.266 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:41.266 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:41.266 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:41.266 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:17:41.266 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:41.266 07:03:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:41.266 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:41.266 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:41.266 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:41.266 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:41.267 /dev/nvme0n2 ]] 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:41.267 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:41.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:41.538 07:03:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:41.538 rmmod nvme_tcp 00:17:41.538 rmmod nvme_fabrics 00:17:41.538 rmmod nvme_keyring 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 227160 ']' 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 227160 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 227160 ']' 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 227160 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 227160 
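Stripped of the xtrace plumbing, the initiator half of the test that just completed reduces to four nvme-cli steps (addresses, NQNs, host identity and serial taken from this run):

    nvme discover -t tcp -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55         # returns the two discovery log entries shown above
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55         # exposes /dev/nvme0n1 and /dev/nvme0n2
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # polled until both namespaces report the serial
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1             # "disconnected 1 controller(s)"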
00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 227160' 00:17:41.538 killing process with pid 227160 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 227160 00:17:41.538 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 227160 00:17:41.796 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:41.796 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:41.796 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:41.796 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:17:41.796 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:17:41.796 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:41.796 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:17:41.796 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:41.796 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:41.796 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.796 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:41.796 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.706 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:43.706 00:17:43.706 real 0m8.288s 00:17:43.706 user 0m15.278s 00:17:43.706 sys 0m2.297s 00:17:43.706 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.706 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:43.706 ************************************ 00:17:43.706 END TEST nvmf_nvme_cli 00:17:43.706 ************************************ 00:17:43.965 07:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:43.965 07:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:43.965 07:03:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:43.965 07:03:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.965 07:03:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:43.966 ************************************ 00:17:43.966 START TEST nvmf_vfio_user 00:17:43.966 ************************************ 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 
00:17:43.966 * Looking for test storage... 00:17:43.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:43.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.966 --rc genhtml_branch_coverage=1 00:17:43.966 --rc genhtml_function_coverage=1 00:17:43.966 --rc genhtml_legend=1 00:17:43.966 --rc geninfo_all_blocks=1 00:17:43.966 --rc geninfo_unexecuted_blocks=1 00:17:43.966 00:17:43.966 ' 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:43.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.966 --rc genhtml_branch_coverage=1 00:17:43.966 --rc genhtml_function_coverage=1 00:17:43.966 --rc genhtml_legend=1 00:17:43.966 --rc geninfo_all_blocks=1 00:17:43.966 --rc geninfo_unexecuted_blocks=1 00:17:43.966 00:17:43.966 ' 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:43.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.966 --rc genhtml_branch_coverage=1 00:17:43.966 --rc genhtml_function_coverage=1 00:17:43.966 --rc genhtml_legend=1 00:17:43.966 --rc geninfo_all_blocks=1 00:17:43.966 --rc geninfo_unexecuted_blocks=1 00:17:43.966 00:17:43.966 ' 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:43.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.966 --rc genhtml_branch_coverage=1 00:17:43.966 --rc genhtml_function_coverage=1 00:17:43.966 --rc genhtml_legend=1 00:17:43.966 --rc geninfo_all_blocks=1 00:17:43.966 --rc geninfo_unexecuted_blocks=1 00:17:43.966 00:17:43.966 ' 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.966 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:43.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
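The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']', i.e. an arithmetic test against a variable that is empty in this environment; the run continues regardless. A minimal reproduction and a guarded form (variable name hypothetical):

    some_id=""
    [ "$some_id" -eq 1 ]                          # bash: [: : integer expression expected
    [ -n "$some_id" ] && [ "$some_id" -eq 1 ]     # quietly false when the variable is empty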
00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=228179 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 228179' 00:17:43.967 Process pid: 228179 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 228179 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 228179 ']' 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.967 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:43.967 [2024-11-18 07:03:04.935124] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:17:43.967 [2024-11-18 07:03:04.935217] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.227 [2024-11-18 07:03:05.004436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:44.227 [2024-11-18 07:03:05.052698] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.227 [2024-11-18 07:03:05.052752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
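Unlike the earlier TCP run, this target instance is pinned with a core list rather than a hex mask; both forms select cores 0-3, which matches the four "Reactor started on core N" notices each run prints:

    nvmf_tgt -i 0 -e 0xFFFF -m 0xF            # hex core mask (nvme_cli run above)
    nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'    # explicit core list (this vfio-user run)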
00:17:44.227 [2024-11-18 07:03:05.052778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.227 [2024-11-18 07:03:05.052789] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.227 [2024-11-18 07:03:05.052799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:44.227 [2024-11-18 07:03:05.054249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.227 [2024-11-18 07:03:05.054308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.227 [2024-11-18 07:03:05.054379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.227 [2024-11-18 07:03:05.054376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:44.227 07:03:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.227 07:03:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:44.227 07:03:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:45.611 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:45.612 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:45.612 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:45.612 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:45.612 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:45.612 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:45.872 Malloc1 00:17:45.872 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:46.130 07:03:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:46.389 07:03:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:46.648 07:03:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:46.648 07:03:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:46.648 07:03:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:47.215 Malloc2 00:17:47.215 07:03:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
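For the VFIOUSER transport the listener "address" is a directory that will hold the controller's socket (the trace later shows the cntrl file created under it), so each emulated controller gets its own path beneath /var/run/vfio-user. Written out directly, the RPC sequence traced above for the first device is:

    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
    # the second device repeats this with Malloc2, cnode2 and /var/run/vfio-user/domain/vfio-user2/2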
00:17:47.473 07:03:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:47.731 07:03:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:47.992 07:03:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:47.992 07:03:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:47.992 07:03:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:47.992 07:03:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:47.992 07:03:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:47.992 07:03:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:47.992 [2024-11-18 07:03:08.790004] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:17:47.992 [2024-11-18 07:03:08.790041] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid229122 ] 00:17:47.992 [2024-11-18 07:03:08.836707] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:47.992 [2024-11-18 07:03:08.845953] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:47.992 [2024-11-18 07:03:08.845983] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f054e2aa000 00:17:47.992 [2024-11-18 07:03:08.846950] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.992 [2024-11-18 07:03:08.847938] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.992 [2024-11-18 07:03:08.848944] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.992 [2024-11-18 07:03:08.849951] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:47.992 [2024-11-18 07:03:08.850954] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:47.992 [2024-11-18 07:03:08.851963] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.992 [2024-11-18 07:03:08.852968] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:17:47.992 [2024-11-18 07:03:08.853974] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.992 [2024-11-18 07:03:08.854977] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:47.992 [2024-11-18 07:03:08.854997] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f054cfa2000 00:17:47.992 [2024-11-18 07:03:08.856119] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:47.992 [2024-11-18 07:03:08.875760] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:47.992 [2024-11-18 07:03:08.875823] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:17:47.992 [2024-11-18 07:03:08.878121] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:47.992 [2024-11-18 07:03:08.878176] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:47.992 [2024-11-18 07:03:08.878262] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:17:47.992 [2024-11-18 07:03:08.878287] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:17:47.992 [2024-11-18 07:03:08.878298] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:17:47.992 [2024-11-18 07:03:08.879114] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:47.992 [2024-11-18 07:03:08.879132] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:17:47.992 [2024-11-18 07:03:08.879144] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:17:47.992 [2024-11-18 07:03:08.880116] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:47.992 [2024-11-18 07:03:08.880136] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:17:47.992 [2024-11-18 07:03:08.880148] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:47.992 [2024-11-18 07:03:08.881121] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:47.992 [2024-11-18 07:03:08.881141] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:47.992 [2024-11-18 07:03:08.882129] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
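All of the BAR-mapping and controller-enable DEBUG lines above, and the IDENTIFY traffic that follows, come from a single identify pass pointed at the vfio-user socket directory rather than a PCI address (command reproduced from the trace, path shortened):

    build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci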
00:17:47.992 [2024-11-18 07:03:08.882148] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:47.992 [2024-11-18 07:03:08.882161] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:47.992 [2024-11-18 07:03:08.882174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:47.992 [2024-11-18 07:03:08.882283] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:17:47.992 [2024-11-18 07:03:08.882291] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:47.992 [2024-11-18 07:03:08.882299] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:47.992 [2024-11-18 07:03:08.883146] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:47.992 [2024-11-18 07:03:08.884141] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:47.992 [2024-11-18 07:03:08.885148] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:47.992 [2024-11-18 07:03:08.886143] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:47.992 [2024-11-18 07:03:08.886267] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:47.992 [2024-11-18 07:03:08.887157] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:47.992 [2024-11-18 07:03:08.887175] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:47.993 [2024-11-18 07:03:08.887184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:47.993 [2024-11-18 07:03:08.887207] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:17:47.993 [2024-11-18 07:03:08.887223] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:47.993 [2024-11-18 07:03:08.887245] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:47.993 [2024-11-18 07:03:08.887255] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.993 [2024-11-18 07:03:08.887262] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.993 [2024-11-18 07:03:08.887279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:17:47.993 [2024-11-18 07:03:08.887344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:47.993 [2024-11-18 07:03:08.887359] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:17:47.993 [2024-11-18 07:03:08.887367] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:17:47.993 [2024-11-18 07:03:08.887374] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:17:47.993 [2024-11-18 07:03:08.887382] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:47.993 [2024-11-18 07:03:08.887393] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:17:47.993 [2024-11-18 07:03:08.887406] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:17:47.993 [2024-11-18 07:03:08.887414] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:17:47.993 [2024-11-18 07:03:08.887429] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:47.993 [2024-11-18 07:03:08.887445] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:47.993 [2024-11-18 07:03:08.887459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:47.993 [2024-11-18 07:03:08.887475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.993 [2024-11-18 07:03:08.887487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.993 [2024-11-18 07:03:08.887521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.993 [2024-11-18 07:03:08.887534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.993 [2024-11-18 07:03:08.887542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:47.993 [2024-11-18 07:03:08.887554] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:47.993 [2024-11-18 07:03:08.887567] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:47.993 [2024-11-18 07:03:08.887579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:47.993 [2024-11-18 07:03:08.887594] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:17:47.993 
[2024-11-18 07:03:08.887603] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:47.993 [2024-11-18 07:03:08.887614] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:17:47.993 [2024-11-18 07:03:08.887623] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:17:47.993 [2024-11-18 07:03:08.887636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:47.993 [2024-11-18 07:03:08.887648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:47.993 [2024-11-18 07:03:08.887714] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:17:47.993 [2024-11-18 07:03:08.887730] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:47.993 [2024-11-18 07:03:08.887743] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:47.993 [2024-11-18 07:03:08.887751] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:47.993 [2024-11-18 07:03:08.887757] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.993 [2024-11-18 07:03:08.887767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:47.993 [2024-11-18 07:03:08.887785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:47.993 [2024-11-18 07:03:08.887801] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:17:47.993 [2024-11-18 07:03:08.887836] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:17:47.993 [2024-11-18 07:03:08.887851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:17:47.993 [2024-11-18 07:03:08.887862] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:47.993 [2024-11-18 07:03:08.887870] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.993 [2024-11-18 07:03:08.887876] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.993 [2024-11-18 07:03:08.887885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:47.993 [2024-11-18 07:03:08.887916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:47.993 [2024-11-18 07:03:08.887936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:17:47.993 [2024-11-18 07:03:08.887951] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:47.993 [2024-11-18 07:03:08.887962] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:47.993 [2024-11-18 07:03:08.887969] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.993 [2024-11-18 07:03:08.887975] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.993 [2024-11-18 07:03:08.887984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:47.993 [2024-11-18 07:03:08.887995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:47.993 [2024-11-18 07:03:08.888008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:47.993 [2024-11-18 07:03:08.888019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:47.993 [2024-11-18 07:03:08.888032] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:17:47.993 [2024-11-18 07:03:08.888042] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:47.993 [2024-11-18 07:03:08.888050] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:47.993 [2024-11-18 07:03:08.888058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:17:47.993 [2024-11-18 07:03:08.888066] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:47.993 [2024-11-18 07:03:08.888073] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:17:47.993 [2024-11-18 07:03:08.888081] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:17:47.993 [2024-11-18 07:03:08.888108] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:47.993 [2024-11-18 07:03:08.888126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:47.993 [2024-11-18 07:03:08.888144] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:47.993 [2024-11-18 07:03:08.888156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:47.993 [2024-11-18 07:03:08.888171] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:47.994 [2024-11-18 07:03:08.888182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:47.994 [2024-11-18 07:03:08.888197] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:47.994 [2024-11-18 07:03:08.888208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:47.994 [2024-11-18 07:03:08.888229] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:47.994 [2024-11-18 07:03:08.888238] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:47.994 [2024-11-18 07:03:08.888245] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:47.994 [2024-11-18 07:03:08.888251] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:47.994 [2024-11-18 07:03:08.888256] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:47.994 [2024-11-18 07:03:08.888265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:47.994 [2024-11-18 07:03:08.888276] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:47.994 [2024-11-18 07:03:08.888284] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:47.994 [2024-11-18 07:03:08.888289] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.994 [2024-11-18 07:03:08.888298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:47.994 [2024-11-18 07:03:08.888308] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:47.994 [2024-11-18 07:03:08.888315] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.994 [2024-11-18 07:03:08.888321] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.994 [2024-11-18 07:03:08.888329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:47.994 [2024-11-18 07:03:08.888340] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:47.994 [2024-11-18 07:03:08.888348] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:47.994 [2024-11-18 07:03:08.888354] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.994 [2024-11-18 07:03:08.888362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:47.994 [2024-11-18 07:03:08.888374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:47.994 [2024-11-18 07:03:08.888395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:17:47.994 [2024-11-18 07:03:08.888412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:47.994 [2024-11-18 07:03:08.888427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:47.994 ===================================================== 00:17:47.994 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:47.994 ===================================================== 00:17:47.994 Controller Capabilities/Features 00:17:47.994 ================================ 00:17:47.994 Vendor ID: 4e58 00:17:47.994 Subsystem Vendor ID: 4e58 00:17:47.994 Serial Number: SPDK1 00:17:47.994 Model Number: SPDK bdev Controller 00:17:47.994 Firmware Version: 25.01 00:17:47.994 Recommended Arb Burst: 6 00:17:47.994 IEEE OUI Identifier: 8d 6b 50 00:17:47.994 Multi-path I/O 00:17:47.994 May have multiple subsystem ports: Yes 00:17:47.994 May have multiple controllers: Yes 00:17:47.994 Associated with SR-IOV VF: No 00:17:47.994 Max Data Transfer Size: 131072 00:17:47.994 Max Number of Namespaces: 32 00:17:47.994 Max Number of I/O Queues: 127 00:17:47.994 NVMe Specification Version (VS): 1.3 00:17:47.994 NVMe Specification Version (Identify): 1.3 00:17:47.994 Maximum Queue Entries: 256 00:17:47.994 Contiguous Queues Required: Yes 00:17:47.994 Arbitration Mechanisms Supported 00:17:47.994 Weighted Round Robin: Not Supported 00:17:47.994 Vendor Specific: Not Supported 00:17:47.994 Reset Timeout: 15000 ms 00:17:47.994 Doorbell Stride: 4 bytes 00:17:47.994 NVM Subsystem Reset: Not Supported 00:17:47.994 Command Sets Supported 00:17:47.994 NVM Command Set: Supported 00:17:47.994 Boot Partition: Not Supported 00:17:47.994 Memory Page Size Minimum: 4096 bytes 00:17:47.994 Memory Page Size Maximum: 4096 bytes 00:17:47.994 Persistent Memory Region: Not Supported 00:17:47.994 Optional Asynchronous Events Supported 00:17:47.994 Namespace Attribute Notices: Supported 00:17:47.994 Firmware Activation Notices: Not Supported 00:17:47.994 ANA Change Notices: Not Supported 00:17:47.994 PLE Aggregate Log Change Notices: Not Supported 00:17:47.994 LBA Status Info Alert Notices: Not Supported 00:17:47.994 EGE Aggregate Log Change Notices: Not Supported 00:17:47.994 Normal NVM Subsystem Shutdown event: Not Supported 00:17:47.994 Zone Descriptor Change Notices: Not Supported 00:17:47.994 Discovery Log Change Notices: Not Supported 00:17:47.994 Controller Attributes 00:17:47.994 128-bit Host Identifier: Supported 00:17:47.994 Non-Operational Permissive Mode: Not Supported 00:17:47.994 NVM Sets: Not Supported 00:17:47.994 Read Recovery Levels: Not Supported 00:17:47.994 Endurance Groups: Not Supported 00:17:47.994 Predictable Latency Mode: Not Supported 00:17:47.994 Traffic Based Keep ALive: Not Supported 00:17:47.994 Namespace Granularity: Not Supported 00:17:47.994 SQ Associations: Not Supported 00:17:47.994 UUID List: Not Supported 00:17:47.994 Multi-Domain Subsystem: Not Supported 00:17:47.994 Fixed Capacity Management: Not Supported 00:17:47.994 Variable Capacity Management: Not Supported 00:17:47.994 Delete Endurance Group: Not Supported 00:17:47.994 Delete NVM Set: Not Supported 00:17:47.994 Extended LBA Formats Supported: Not Supported 00:17:47.994 Flexible Data Placement Supported: Not Supported 00:17:47.994 00:17:47.994 Controller Memory Buffer Support 00:17:47.994 ================================ 00:17:47.994 
Supported: No 00:17:47.994 00:17:47.994 Persistent Memory Region Support 00:17:47.994 ================================ 00:17:47.994 Supported: No 00:17:47.994 00:17:47.994 Admin Command Set Attributes 00:17:47.994 ============================ 00:17:47.994 Security Send/Receive: Not Supported 00:17:47.994 Format NVM: Not Supported 00:17:47.994 Firmware Activate/Download: Not Supported 00:17:47.994 Namespace Management: Not Supported 00:17:47.994 Device Self-Test: Not Supported 00:17:47.994 Directives: Not Supported 00:17:47.994 NVMe-MI: Not Supported 00:17:47.994 Virtualization Management: Not Supported 00:17:47.994 Doorbell Buffer Config: Not Supported 00:17:47.994 Get LBA Status Capability: Not Supported 00:17:47.994 Command & Feature Lockdown Capability: Not Supported 00:17:47.995 Abort Command Limit: 4 00:17:47.995 Async Event Request Limit: 4 00:17:47.995 Number of Firmware Slots: N/A 00:17:47.995 Firmware Slot 1 Read-Only: N/A 00:17:47.995 Firmware Activation Without Reset: N/A 00:17:47.995 Multiple Update Detection Support: N/A 00:17:47.995 Firmware Update Granularity: No Information Provided 00:17:47.995 Per-Namespace SMART Log: No 00:17:47.995 Asymmetric Namespace Access Log Page: Not Supported 00:17:47.995 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:47.995 Command Effects Log Page: Supported 00:17:47.995 Get Log Page Extended Data: Supported 00:17:47.995 Telemetry Log Pages: Not Supported 00:17:47.995 Persistent Event Log Pages: Not Supported 00:17:47.995 Supported Log Pages Log Page: May Support 00:17:47.995 Commands Supported & Effects Log Page: Not Supported 00:17:47.995 Feature Identifiers & Effects Log Page:May Support 00:17:47.995 NVMe-MI Commands & Effects Log Page: May Support 00:17:47.995 Data Area 4 for Telemetry Log: Not Supported 00:17:47.995 Error Log Page Entries Supported: 128 00:17:47.995 Keep Alive: Supported 00:17:47.995 Keep Alive Granularity: 10000 ms 00:17:47.995 00:17:47.995 NVM Command Set Attributes 00:17:47.995 ========================== 00:17:47.995 Submission Queue Entry Size 00:17:47.995 Max: 64 00:17:47.995 Min: 64 00:17:47.995 Completion Queue Entry Size 00:17:47.995 Max: 16 00:17:47.995 Min: 16 00:17:47.995 Number of Namespaces: 32 00:17:47.995 Compare Command: Supported 00:17:47.995 Write Uncorrectable Command: Not Supported 00:17:47.995 Dataset Management Command: Supported 00:17:47.995 Write Zeroes Command: Supported 00:17:47.995 Set Features Save Field: Not Supported 00:17:47.995 Reservations: Not Supported 00:17:47.995 Timestamp: Not Supported 00:17:47.995 Copy: Supported 00:17:47.995 Volatile Write Cache: Present 00:17:47.995 Atomic Write Unit (Normal): 1 00:17:47.995 Atomic Write Unit (PFail): 1 00:17:47.995 Atomic Compare & Write Unit: 1 00:17:47.995 Fused Compare & Write: Supported 00:17:47.995 Scatter-Gather List 00:17:47.995 SGL Command Set: Supported (Dword aligned) 00:17:47.995 SGL Keyed: Not Supported 00:17:47.995 SGL Bit Bucket Descriptor: Not Supported 00:17:47.995 SGL Metadata Pointer: Not Supported 00:17:47.995 Oversized SGL: Not Supported 00:17:47.995 SGL Metadata Address: Not Supported 00:17:47.995 SGL Offset: Not Supported 00:17:47.995 Transport SGL Data Block: Not Supported 00:17:47.995 Replay Protected Memory Block: Not Supported 00:17:47.995 00:17:47.995 Firmware Slot Information 00:17:47.995 ========================= 00:17:47.995 Active slot: 1 00:17:47.995 Slot 1 Firmware Revision: 25.01 00:17:47.995 00:17:47.995 00:17:47.995 Commands Supported and Effects 00:17:47.995 ============================== 00:17:47.995 Admin 
Commands 00:17:47.995 -------------- 00:17:47.995 Get Log Page (02h): Supported 00:17:47.995 Identify (06h): Supported 00:17:47.995 Abort (08h): Supported 00:17:47.995 Set Features (09h): Supported 00:17:47.995 Get Features (0Ah): Supported 00:17:47.995 Asynchronous Event Request (0Ch): Supported 00:17:47.995 Keep Alive (18h): Supported 00:17:47.995 I/O Commands 00:17:47.995 ------------ 00:17:47.995 Flush (00h): Supported LBA-Change 00:17:47.995 Write (01h): Supported LBA-Change 00:17:47.995 Read (02h): Supported 00:17:47.995 Compare (05h): Supported 00:17:47.995 Write Zeroes (08h): Supported LBA-Change 00:17:47.995 Dataset Management (09h): Supported LBA-Change 00:17:47.995 Copy (19h): Supported LBA-Change 00:17:47.995 00:17:47.995 Error Log 00:17:47.995 ========= 00:17:47.995 00:17:47.995 Arbitration 00:17:47.995 =========== 00:17:47.995 Arbitration Burst: 1 00:17:47.995 00:17:47.995 Power Management 00:17:47.995 ================ 00:17:47.995 Number of Power States: 1 00:17:47.995 Current Power State: Power State #0 00:17:47.995 Power State #0: 00:17:47.995 Max Power: 0.00 W 00:17:47.995 Non-Operational State: Operational 00:17:47.995 Entry Latency: Not Reported 00:17:47.995 Exit Latency: Not Reported 00:17:47.995 Relative Read Throughput: 0 00:17:47.995 Relative Read Latency: 0 00:17:47.995 Relative Write Throughput: 0 00:17:47.995 Relative Write Latency: 0 00:17:47.995 Idle Power: Not Reported 00:17:47.995 Active Power: Not Reported 00:17:47.995 Non-Operational Permissive Mode: Not Supported 00:17:47.995 00:17:47.995 Health Information 00:17:47.995 ================== 00:17:47.995 Critical Warnings: 00:17:47.995 Available Spare Space: OK 00:17:47.995 Temperature: OK 00:17:47.995 Device Reliability: OK 00:17:47.995 Read Only: No 00:17:47.995 Volatile Memory Backup: OK 00:17:47.995 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:47.995 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:47.995 Available Spare: 0% 00:17:47.995 Available Sp[2024-11-18 07:03:08.888582] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:47.995 [2024-11-18 07:03:08.888599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:47.995 [2024-11-18 07:03:08.888641] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:17:47.995 [2024-11-18 07:03:08.888658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.995 [2024-11-18 07:03:08.888669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.995 [2024-11-18 07:03:08.888679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.995 [2024-11-18 07:03:08.888689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.995 [2024-11-18 07:03:08.889170] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:47.995 [2024-11-18 07:03:08.889190] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:47.995 [2024-11-18 07:03:08.890167] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:47.995 [2024-11-18 07:03:08.890244] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:17:47.995 [2024-11-18 07:03:08.890258] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:17:47.995 [2024-11-18 07:03:08.891179] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:47.995 [2024-11-18 07:03:08.891201] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:17:47.995 [2024-11-18 07:03:08.891253] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:47.995 [2024-11-18 07:03:08.893218] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:47.995 are Threshold: 0% 00:17:47.995 Life Percentage Used: 0% 00:17:47.995 Data Units Read: 0 00:17:47.995 Data Units Written: 0 00:17:47.995 Host Read Commands: 0 00:17:47.995 Host Write Commands: 0 00:17:47.995 Controller Busy Time: 0 minutes 00:17:47.995 Power Cycles: 0 00:17:47.995 Power On Hours: 0 hours 00:17:47.995 Unsafe Shutdowns: 0 00:17:47.995 Unrecoverable Media Errors: 0 00:17:47.995 Lifetime Error Log Entries: 0 00:17:47.995 Warning Temperature Time: 0 minutes 00:17:47.995 Critical Temperature Time: 0 minutes 00:17:47.995 00:17:47.995 Number of Queues 00:17:47.995 ================ 00:17:47.995 Number of I/O Submission Queues: 127 00:17:47.995 Number of I/O Completion Queues: 127 00:17:47.995 00:17:47.996 Active Namespaces 00:17:47.996 ================= 00:17:47.996 Namespace ID:1 00:17:47.996 Error Recovery Timeout: Unlimited 00:17:47.996 Command Set Identifier: NVM (00h) 00:17:47.996 Deallocate: Supported 00:17:47.996 Deallocated/Unwritten Error: Not Supported 00:17:47.996 Deallocated Read Value: Unknown 00:17:47.996 Deallocate in Write Zeroes: Not Supported 00:17:47.996 Deallocated Guard Field: 0xFFFF 00:17:47.996 Flush: Supported 00:17:47.996 Reservation: Supported 00:17:47.996 Namespace Sharing Capabilities: Multiple Controllers 00:17:47.996 Size (in LBAs): 131072 (0GiB) 00:17:47.996 Capacity (in LBAs): 131072 (0GiB) 00:17:47.996 Utilization (in LBAs): 131072 (0GiB) 00:17:47.996 NGUID: B68624A51D9D4C69B919B44AB28EE00E 00:17:47.996 UUID: b68624a5-1d9d-4c69-b919-b44ab28ee00e 00:17:47.996 Thin Provisioning: Not Supported 00:17:47.996 Per-NS Atomic Units: Yes 00:17:47.996 Atomic Boundary Size (Normal): 0 00:17:47.996 Atomic Boundary Size (PFail): 0 00:17:47.996 Atomic Boundary Offset: 0 00:17:47.996 Maximum Single Source Range Length: 65535 00:17:47.996 Maximum Copy Length: 65535 00:17:47.996 Maximum Source Range Count: 1 00:17:47.996 NGUID/EUI64 Never Reused: No 00:17:47.996 Namespace Write Protected: No 00:17:47.996 Number of LBA Formats: 1 00:17:47.996 Current LBA Format: LBA Format #00 00:17:47.996 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:47.996 00:17:47.996 07:03:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
00:17:48.256 [2024-11-18 07:03:09.137461] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:53.540 Initializing NVMe Controllers 00:17:53.540 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:53.540 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:53.540 Initialization complete. Launching workers. 00:17:53.540 ======================================================== 00:17:53.540 Latency(us) 00:17:53.540 Device Information : IOPS MiB/s Average min max 00:17:53.540 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33876.80 132.33 3778.45 1168.24 7671.93 00:17:53.540 ======================================================== 00:17:53.540 Total : 33876.80 132.33 3778.45 1168.24 7671.93 00:17:53.540 00:17:53.540 [2024-11-18 07:03:14.160394] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:53.540 07:03:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:53.540 [2024-11-18 07:03:14.407581] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:58.825 Initializing NVMe Controllers 00:17:58.825 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:58.825 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:58.825 Initialization complete. Launching workers. 
00:17:58.825 ======================================================== 00:17:58.825 Latency(us) 00:17:58.825 Device Information : IOPS MiB/s Average min max 00:17:58.825 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16050.81 62.70 7979.88 5984.34 11001.83 00:17:58.825 ======================================================== 00:17:58.825 Total : 16050.81 62.70 7979.88 5984.34 11001.83 00:17:58.825 00:17:58.825 [2024-11-18 07:03:19.451834] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:58.825 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:58.825 [2024-11-18 07:03:19.688003] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:04.107 [2024-11-18 07:03:24.769880] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:04.107 Initializing NVMe Controllers 00:18:04.107 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:04.107 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:04.107 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:04.107 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:04.107 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:04.107 Initialization complete. Launching workers. 00:18:04.107 Starting thread on core 2 00:18:04.107 Starting thread on core 3 00:18:04.107 Starting thread on core 1 00:18:04.107 07:03:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:04.107 [2024-11-18 07:03:25.083011] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:07.404 [2024-11-18 07:03:28.156910] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:07.404 Initializing NVMe Controllers 00:18:07.404 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:07.404 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:07.404 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:07.404 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:07.404 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:07.404 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:07.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:07.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:07.404 Initialization complete. Launching workers. 
00:18:07.404 Starting thread on core 1 with urgent priority queue 00:18:07.404 Starting thread on core 2 with urgent priority queue 00:18:07.404 Starting thread on core 3 with urgent priority queue 00:18:07.404 Starting thread on core 0 with urgent priority queue 00:18:07.404 SPDK bdev Controller (SPDK1 ) core 0: 5086.33 IO/s 19.66 secs/100000 ios 00:18:07.404 SPDK bdev Controller (SPDK1 ) core 1: 5519.67 IO/s 18.12 secs/100000 ios 00:18:07.404 SPDK bdev Controller (SPDK1 ) core 2: 5576.00 IO/s 17.93 secs/100000 ios 00:18:07.404 SPDK bdev Controller (SPDK1 ) core 3: 5528.67 IO/s 18.09 secs/100000 ios 00:18:07.404 ======================================================== 00:18:07.404 00:18:07.404 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:07.663 [2024-11-18 07:03:28.455838] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:07.663 Initializing NVMe Controllers 00:18:07.663 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:07.663 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:07.663 Namespace ID: 1 size: 0GB 00:18:07.663 Initialization complete. 00:18:07.663 INFO: using host memory buffer for IO 00:18:07.663 Hello world! 00:18:07.663 [2024-11-18 07:03:28.490516] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:07.663 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:07.924 [2024-11-18 07:03:28.803043] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:08.862 Initializing NVMe Controllers 00:18:08.862 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:08.862 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:08.862 Initialization complete. Launching workers. 
00:18:08.862 submit (in ns) avg, min, max = 7050.1, 3506.7, 4015405.6 00:18:08.862 complete (in ns) avg, min, max = 30994.1, 2074.4, 4016022.2 00:18:08.862 00:18:08.862 Submit histogram 00:18:08.862 ================ 00:18:08.862 Range in us Cumulative Count 00:18:08.862 3.484 - 3.508: 0.0078% ( 1) 00:18:08.862 3.508 - 3.532: 0.3133% ( 39) 00:18:08.862 3.532 - 3.556: 1.1357% ( 105) 00:18:08.862 3.556 - 3.579: 3.8142% ( 342) 00:18:08.862 3.579 - 3.603: 7.9182% ( 524) 00:18:08.862 3.603 - 3.627: 15.8521% ( 1013) 00:18:08.862 3.627 - 3.650: 23.9818% ( 1038) 00:18:08.862 3.650 - 3.674: 31.9392% ( 1016) 00:18:08.862 3.674 - 3.698: 38.8315% ( 880) 00:18:08.862 3.698 - 3.721: 47.2196% ( 1071) 00:18:08.862 3.721 - 3.745: 53.8690% ( 849) 00:18:08.862 3.745 - 3.769: 59.6335% ( 736) 00:18:08.862 3.769 - 3.793: 63.7531% ( 526) 00:18:08.862 3.793 - 3.816: 67.3716% ( 462) 00:18:08.862 3.816 - 3.840: 71.1858% ( 487) 00:18:08.862 3.840 - 3.864: 75.0157% ( 489) 00:18:08.862 3.864 - 3.887: 78.4696% ( 441) 00:18:08.862 3.887 - 3.911: 81.7356% ( 417) 00:18:08.862 3.911 - 3.935: 84.7039% ( 379) 00:18:08.862 3.935 - 3.959: 87.1554% ( 313) 00:18:08.862 3.959 - 3.982: 89.0351% ( 240) 00:18:08.862 3.982 - 4.006: 90.8051% ( 226) 00:18:08.862 4.006 - 4.030: 92.1131% ( 167) 00:18:08.862 4.030 - 4.053: 93.2566% ( 146) 00:18:08.862 4.053 - 4.077: 94.0241% ( 98) 00:18:08.862 4.077 - 4.101: 94.7212% ( 89) 00:18:08.862 4.101 - 4.124: 95.3242% ( 77) 00:18:08.862 4.124 - 4.148: 95.6924% ( 47) 00:18:08.862 4.148 - 4.172: 95.8568% ( 21) 00:18:08.862 4.172 - 4.196: 96.0683% ( 27) 00:18:08.862 4.196 - 4.219: 96.2171% ( 19) 00:18:08.862 4.219 - 4.243: 96.4051% ( 24) 00:18:08.862 4.243 - 4.267: 96.5147% ( 14) 00:18:08.862 4.267 - 4.290: 96.6009% ( 11) 00:18:08.862 4.290 - 4.314: 96.7262% ( 16) 00:18:08.862 4.314 - 4.338: 96.7967% ( 9) 00:18:08.862 4.338 - 4.361: 96.8437% ( 6) 00:18:08.862 4.361 - 4.385: 96.9690% ( 16) 00:18:08.862 4.385 - 4.409: 97.0551% ( 11) 00:18:08.862 4.409 - 4.433: 97.1100% ( 7) 00:18:08.862 4.433 - 4.456: 97.1648% ( 7) 00:18:08.862 4.456 - 4.480: 97.1883% ( 3) 00:18:08.862 4.480 - 4.504: 97.2353% ( 6) 00:18:08.862 4.504 - 4.527: 97.2823% ( 6) 00:18:08.862 4.527 - 4.551: 97.3058% ( 3) 00:18:08.862 4.551 - 4.575: 97.3606% ( 7) 00:18:08.862 4.575 - 4.599: 97.4311% ( 9) 00:18:08.862 4.599 - 4.622: 97.4546% ( 3) 00:18:08.862 4.622 - 4.646: 97.4624% ( 1) 00:18:08.862 4.646 - 4.670: 97.5329% ( 9) 00:18:08.862 4.670 - 4.693: 97.5486% ( 2) 00:18:08.862 4.693 - 4.717: 97.5799% ( 4) 00:18:08.862 4.717 - 4.741: 97.6190% ( 5) 00:18:08.862 4.741 - 4.764: 97.6425% ( 3) 00:18:08.862 4.764 - 4.788: 97.6660% ( 3) 00:18:08.862 4.788 - 4.812: 97.6739% ( 1) 00:18:08.862 4.812 - 4.836: 97.7130% ( 5) 00:18:08.862 4.836 - 4.859: 97.7600% ( 6) 00:18:08.862 4.859 - 4.883: 97.8305% ( 9) 00:18:08.862 4.883 - 4.907: 97.8697% ( 5) 00:18:08.862 4.907 - 4.930: 97.9323% ( 8) 00:18:08.862 4.930 - 4.954: 98.0107% ( 10) 00:18:08.862 4.954 - 4.978: 98.0733% ( 8) 00:18:08.862 4.978 - 5.001: 98.1125% ( 5) 00:18:08.862 5.001 - 5.025: 98.1830% ( 9) 00:18:08.862 5.025 - 5.049: 98.2221% ( 5) 00:18:08.862 5.049 - 5.073: 98.2613% ( 5) 00:18:08.862 5.073 - 5.096: 98.3083% ( 6) 00:18:08.862 5.096 - 5.120: 98.3318% ( 3) 00:18:08.862 5.120 - 5.144: 98.3709% ( 5) 00:18:08.862 5.144 - 5.167: 98.4336% ( 8) 00:18:08.862 5.167 - 5.191: 98.4727% ( 5) 00:18:08.862 5.191 - 5.215: 98.4962% ( 3) 00:18:08.862 5.215 - 5.239: 98.5276% ( 4) 00:18:08.862 5.239 - 5.262: 98.5511% ( 3) 00:18:08.862 5.262 - 5.286: 98.5824% ( 4) 00:18:08.862 5.286 - 5.310: 98.5981% ( 2) 
00:18:08.862 5.310 - 5.333: 98.6372% ( 5) 00:18:08.862 5.333 - 5.357: 98.6529% ( 2) 00:18:08.862 5.357 - 5.381: 98.6607% ( 1) 00:18:08.862 5.381 - 5.404: 98.6685% ( 1) 00:18:08.862 5.404 - 5.428: 98.6764% ( 1) 00:18:08.862 5.428 - 5.452: 98.6842% ( 1) 00:18:08.862 5.476 - 5.499: 98.6920% ( 1) 00:18:08.862 5.499 - 5.523: 98.7234% ( 4) 00:18:08.862 5.523 - 5.547: 98.7312% ( 1) 00:18:08.862 5.547 - 5.570: 98.7469% ( 2) 00:18:08.862 5.594 - 5.618: 98.7625% ( 2) 00:18:08.862 5.689 - 5.713: 98.7782% ( 2) 00:18:08.862 5.831 - 5.855: 98.7860% ( 1) 00:18:08.862 5.855 - 5.879: 98.7939% ( 1) 00:18:08.862 5.879 - 5.902: 98.8017% ( 1) 00:18:08.862 5.973 - 5.997: 98.8095% ( 1) 00:18:08.862 6.021 - 6.044: 98.8174% ( 1) 00:18:08.862 6.210 - 6.258: 98.8252% ( 1) 00:18:08.862 6.258 - 6.305: 98.8330% ( 1) 00:18:08.862 6.495 - 6.542: 98.8409% ( 1) 00:18:08.862 6.542 - 6.590: 98.8487% ( 1) 00:18:08.862 7.016 - 7.064: 98.8565% ( 1) 00:18:08.862 7.206 - 7.253: 98.8643% ( 1) 00:18:08.862 7.680 - 7.727: 98.8722% ( 1) 00:18:08.862 7.775 - 7.822: 98.8800% ( 1) 00:18:08.862 7.822 - 7.870: 98.8878% ( 1) 00:18:08.862 8.059 - 8.107: 98.9035% ( 2) 00:18:08.862 8.249 - 8.296: 98.9192% ( 2) 00:18:08.862 8.391 - 8.439: 98.9270% ( 1) 00:18:08.862 8.486 - 8.533: 98.9348% ( 1) 00:18:08.862 8.581 - 8.628: 98.9427% ( 1) 00:18:08.862 8.723 - 8.770: 98.9505% ( 1) 00:18:08.862 8.913 - 8.960: 98.9583% ( 1) 00:18:08.862 9.007 - 9.055: 98.9662% ( 1) 00:18:08.862 9.244 - 9.292: 98.9740% ( 1) 00:18:08.862 9.292 - 9.339: 98.9818% ( 1) 00:18:08.862 9.529 - 9.576: 98.9897% ( 1) 00:18:08.862 9.576 - 9.624: 99.0053% ( 2) 00:18:08.863 9.671 - 9.719: 99.0132% ( 1) 00:18:08.863 9.719 - 9.766: 99.0288% ( 2) 00:18:08.863 9.766 - 9.813: 99.0367% ( 1) 00:18:08.863 9.861 - 9.908: 99.0445% ( 1) 00:18:08.863 9.956 - 10.003: 99.0523% ( 1) 00:18:08.863 10.287 - 10.335: 99.0602% ( 1) 00:18:08.863 10.382 - 10.430: 99.0680% ( 1) 00:18:08.863 10.430 - 10.477: 99.0758% ( 1) 00:18:08.863 10.856 - 10.904: 99.0836% ( 1) 00:18:08.863 10.904 - 10.951: 99.0915% ( 1) 00:18:08.863 10.951 - 10.999: 99.0993% ( 1) 00:18:08.863 11.093 - 11.141: 99.1071% ( 1) 00:18:08.863 11.710 - 11.757: 99.1150% ( 1) 00:18:08.863 11.757 - 11.804: 99.1228% ( 1) 00:18:08.863 11.947 - 11.994: 99.1306% ( 1) 00:18:08.863 12.136 - 12.231: 99.1385% ( 1) 00:18:08.863 12.231 - 12.326: 99.1463% ( 1) 00:18:08.863 12.516 - 12.610: 99.1541% ( 1) 00:18:08.863 12.705 - 12.800: 99.1620% ( 1) 00:18:08.863 12.895 - 12.990: 99.1698% ( 1) 00:18:08.863 13.369 - 13.464: 99.1776% ( 1) 00:18:08.863 13.938 - 14.033: 99.1855% ( 1) 00:18:08.863 15.265 - 15.360: 99.1933% ( 1) 00:18:08.863 17.161 - 17.256: 99.2011% ( 1) 00:18:08.863 17.351 - 17.446: 99.2090% ( 1) 00:18:08.863 17.446 - 17.541: 99.2325% ( 3) 00:18:08.863 17.541 - 17.636: 99.2638% ( 4) 00:18:08.863 17.636 - 17.730: 99.3186% ( 7) 00:18:08.863 17.730 - 17.825: 99.3656% ( 6) 00:18:08.863 17.825 - 17.920: 99.4048% ( 5) 00:18:08.863 17.920 - 18.015: 99.4518% ( 6) 00:18:08.863 18.015 - 18.110: 99.4753% ( 3) 00:18:08.863 18.110 - 18.204: 99.5301% ( 7) 00:18:08.863 18.204 - 18.299: 99.5614% ( 4) 00:18:08.863 18.299 - 18.394: 99.6006% ( 5) 00:18:08.863 18.394 - 18.489: 99.6789% ( 10) 00:18:08.863 18.489 - 18.584: 99.7102% ( 4) 00:18:08.863 18.584 - 18.679: 99.7337% ( 3) 00:18:08.863 18.679 - 18.773: 99.7650% ( 4) 00:18:08.863 18.773 - 18.868: 99.7729% ( 1) 00:18:08.863 18.868 - 18.963: 99.8042% ( 4) 00:18:08.863 19.058 - 19.153: 99.8120% ( 1) 00:18:08.863 19.153 - 19.247: 99.8199% ( 1) 00:18:08.863 19.437 - 19.532: 99.8277% ( 1) 00:18:08.863 19.532 - 19.627: 
99.8355% ( 1) 00:18:08.863 19.627 - 19.721: 99.8434% ( 1) 00:18:08.863 19.816 - 19.911: 99.8512% ( 1) 00:18:08.863 20.196 - 20.290: 99.8590% ( 1) 00:18:08.863 21.144 - 21.239: 99.8669% ( 1) 00:18:08.863 22.281 - 22.376: 99.8747% ( 1) 00:18:08.863 23.040 - 23.135: 99.8825% ( 1) 00:18:08.863 23.514 - 23.609: 99.8904% ( 1) 00:18:08.863 26.169 - 26.359: 99.8982% ( 1) 00:18:08.863 28.065 - 28.255: 99.9060% ( 1) 00:18:08.863 28.444 - 28.634: 99.9138% ( 1) 00:18:08.863 35.081 - 35.271: 99.9217% ( 1) 00:18:08.863 3980.705 - 4004.978: 99.9687% ( 6) 00:18:08.863 4004.978 - 4029.250: 100.0000% ( 4) 00:18:08.863 00:18:08.863 Complete histogram 00:18:08.863 ================== 00:18:08.863 Range in us Cumulative Count 00:18:08.863 2.074 - 2.086: 6.5006% ( 830) 00:18:08.863 2.086 - 2.098: 38.7061% ( 4112) 00:18:08.863 2.098 - 2.110: 44.1259% ( 692) 00:18:08.863 2.110 - 2.121: 50.6971% ( 839) 00:18:08.863 2.121 - 2.133: 58.2785% ( 968) 00:18:08.863 2.133 - 2.145: 59.8841% ( 205) 00:18:08.863 2.145 - 2.157: 68.6404% ( 1118) 00:18:08.863 2.157 - 2.169: 80.0517% ( 1457) 00:18:08.863 2.169 - 2.181: 81.7513% ( 217) 00:18:08.863 2.181 - 2.193: 85.0956% ( 427) 00:18:08.863 2.193 - 2.204: 87.7271% ( 336) 00:18:08.863 2.204 - 2.216: 88.4164% ( 88) 00:18:08.863 2.216 - 2.228: 89.7870% ( 175) 00:18:08.863 2.228 - 2.240: 91.0244% ( 158) 00:18:08.863 2.240 - 2.252: 92.7318% ( 218) 00:18:08.863 2.252 - 2.264: 93.9066% ( 150) 00:18:08.863 2.264 - 2.276: 94.1729% ( 34) 00:18:08.863 2.276 - 2.287: 94.2826% ( 14) 00:18:08.863 2.287 - 2.299: 94.4314% ( 19) 00:18:08.863 2.299 - 2.311: 94.6350% ( 26) 00:18:08.863 2.311 - 2.323: 95.0188% ( 49) 00:18:08.863 2.323 - 2.335: 95.3164% ( 38) 00:18:08.863 2.335 - 2.347: 95.3634% ( 6) 00:18:08.863 2.347 - 2.359: 95.4261% ( 8) 00:18:08.863 2.359 - 2.370: 95.5122% ( 11) 00:18:08.863 2.370 - 2.382: 95.6924% ( 23) 00:18:08.863 2.382 - 2.394: 95.9586% ( 34) 00:18:08.863 2.394 - 2.406: 96.1936% ( 30) 00:18:08.863 2.406 - 2.418: 96.3737% ( 23) 00:18:08.863 2.418 - 2.430: 96.5147% ( 18) 00:18:08.863 2.430 - 2.441: 96.6087% ( 12) 00:18:08.863 2.441 - 2.453: 96.6949% ( 11) 00:18:08.863 2.453 - 2.465: 96.8358% ( 18) 00:18:08.863 2.465 - 2.477: 96.9298% ( 12) 00:18:08.863 2.477 - 2.489: 97.0003% ( 9) 00:18:08.863 2.489 - 2.501: 97.0786% ( 10) 00:18:08.863 2.501 - 2.513: 97.1648% ( 11) 00:18:08.863 2.513 - 2.524: 97.2039% ( 5) 00:18:08.863 2.524 - 2.536: 97.2588% ( 7) 00:18:08.863 2.536 - 2.548: 97.2823% ( 3) 00:18:08.863 2.548 - 2.560: 97.3449% ( 8) 00:18:08.863 2.560 - 2.572: 97.3763% ( 4) 00:18:08.863 2.572 - 2.584: 97.3997% ( 3) 00:18:08.863 2.584 - 2.596: 97.4154% ( 2) 00:18:08.863 2.607 - 2.619: 97.4389% ( 3) 00:18:08.863 2.619 - 2.631: 97.4781% ( 5) 00:18:08.863 2.631 - 2.643: 97.5016% ( 3) 00:18:08.863 2.643 - 2.655: 97.5251% ( 3) 00:18:08.863 2.655 - 2.667: 97.5877% ( 8) 00:18:08.863 2.667 - 2.679: 97.6347% ( 6) 00:18:08.863 2.679 - 2.690: 97.6739% ( 5) 00:18:08.863 2.690 - 2.702: 97.7052% ( 4) 00:18:08.863 2.702 - 2.714: 97.7600% ( 7) 00:18:08.863 2.714 - 2.726: 97.7914% ( 4) 00:18:08.863 2.726 - 2.738: 97.8305% ( 5) 00:18:08.863 2.738 - 2.750: 97.8618% ( 4) 00:18:08.863 2.750 - 2.761: 97.8932% ( 4) 00:18:08.863 2.761 - 2.773: 97.9088% ( 2) 00:18:08.863 2.773 - 2.785: 97.9402% ( 4) 00:18:08.863 2.785 - 2.797: 97.9715% ( 4) 00:18:08.863 2.797 - 2.809: 97.9872% ( 2) 00:18:08.863 2.809 - 2.821: 98.0185% ( 4) 00:18:08.863 2.821 - 2.833: 98.0263% ( 1) 00:18:08.863 2.833 - 2.844: 98.0576% ( 4) 00:18:08.863 2.844 - 2.856: 98.0811% ( 3) 00:18:08.863 2.856 - 2.868: 98.0890% ( 1) 00:18:08.863 
2.868 - 2.880: 98.0968% ( 1) 00:18:08.863 2.880 - 2.892: 98.1046% ( 1) 00:18:08.863 2.892 - 2.904: 98.1203% ( 2) 00:18:08.863 2.927 - 2.939: 98.1281% ( 1) 00:18:08.863 2.975 - 2.987: 98.1360% ( 1) 00:18:08.863 2.987 - 2.999: 98.1438% ( 1) 00:18:08.864 3.010 - 3.022: 98.1516% ( 1) 00:18:08.864 3.022 - 3.034: 98.1595% ( 1) 00:18:08.864 3.034 - 3.058: 98.1830% ( 3) 00:18:08.864 3.058 - 3.081: 98.2143% ( 4) 00:18:08.864 3.081 - 3.105: 98.2221% ( 1) 00:18:08.864 3.105 - 3.129: 98.2456% ( 3) 00:18:08.864 3.129 - 3.153: 9[2024-11-18 07:03:29.825226] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:09.122 8.2691% ( 3) 00:18:09.122 3.153 - 3.176: 98.3004% ( 4) 00:18:09.122 3.176 - 3.200: 98.3083% ( 1) 00:18:09.122 3.200 - 3.224: 98.3239% ( 2) 00:18:09.122 3.224 - 3.247: 98.3474% ( 3) 00:18:09.122 3.247 - 3.271: 98.3788% ( 4) 00:18:09.122 3.271 - 3.295: 98.3944% ( 2) 00:18:09.122 3.319 - 3.342: 98.4023% ( 1) 00:18:09.122 3.342 - 3.366: 98.4258% ( 3) 00:18:09.122 3.413 - 3.437: 98.4414% ( 2) 00:18:09.122 3.437 - 3.461: 98.4492% ( 1) 00:18:09.122 3.484 - 3.508: 98.4571% ( 1) 00:18:09.122 3.532 - 3.556: 98.4649% ( 1) 00:18:09.122 3.579 - 3.603: 98.4727% ( 1) 00:18:09.122 3.627 - 3.650: 98.4806% ( 1) 00:18:09.122 3.674 - 3.698: 98.5041% ( 3) 00:18:09.122 3.721 - 3.745: 98.5119% ( 1) 00:18:09.122 3.745 - 3.769: 98.5197% ( 1) 00:18:09.122 3.840 - 3.864: 98.5276% ( 1) 00:18:09.122 3.887 - 3.911: 98.5354% ( 1) 00:18:09.122 3.911 - 3.935: 98.5432% ( 1) 00:18:09.122 3.935 - 3.959: 98.5589% ( 2) 00:18:09.122 3.959 - 3.982: 98.5824% ( 3) 00:18:09.122 3.982 - 4.006: 98.5902% ( 1) 00:18:09.122 4.101 - 4.124: 98.6059% ( 2) 00:18:09.122 4.219 - 4.243: 98.6137% ( 1) 00:18:09.122 4.243 - 4.267: 98.6216% ( 1) 00:18:09.122 4.267 - 4.290: 98.6372% ( 2) 00:18:09.122 4.954 - 4.978: 98.6451% ( 1) 00:18:09.122 6.068 - 6.116: 98.6529% ( 1) 00:18:09.122 6.116 - 6.163: 98.6607% ( 1) 00:18:09.122 6.163 - 6.210: 98.6685% ( 1) 00:18:09.122 6.400 - 6.447: 98.6764% ( 1) 00:18:09.122 6.590 - 6.637: 98.6999% ( 3) 00:18:09.122 6.827 - 6.874: 98.7077% ( 1) 00:18:09.122 6.874 - 6.921: 98.7155% ( 1) 00:18:09.122 6.921 - 6.969: 98.7234% ( 1) 00:18:09.122 7.111 - 7.159: 98.7312% ( 1) 00:18:09.122 7.585 - 7.633: 98.7390% ( 1) 00:18:09.122 8.249 - 8.296: 98.7469% ( 1) 00:18:09.122 8.344 - 8.391: 98.7547% ( 1) 00:18:09.122 8.391 - 8.439: 98.7625% ( 1) 00:18:09.122 9.481 - 9.529: 98.7704% ( 1) 00:18:09.122 10.335 - 10.382: 98.7782% ( 1) 00:18:09.122 14.317 - 14.412: 98.7860% ( 1) 00:18:09.122 15.360 - 15.455: 98.7939% ( 1) 00:18:09.122 15.644 - 15.739: 98.8017% ( 1) 00:18:09.122 15.739 - 15.834: 98.8174% ( 2) 00:18:09.122 15.834 - 15.929: 98.8409% ( 3) 00:18:09.122 15.929 - 16.024: 98.8800% ( 5) 00:18:09.122 16.024 - 16.119: 98.8957% ( 2) 00:18:09.122 16.119 - 16.213: 98.9192% ( 3) 00:18:09.122 16.213 - 16.308: 98.9505% ( 4) 00:18:09.122 16.308 - 16.403: 98.9818% ( 4) 00:18:09.122 16.403 - 16.498: 99.0132% ( 4) 00:18:09.122 16.498 - 16.593: 99.0288% ( 2) 00:18:09.122 16.593 - 16.687: 99.0836% ( 7) 00:18:09.122 16.687 - 16.782: 99.1071% ( 3) 00:18:09.122 16.782 - 16.877: 99.1385% ( 4) 00:18:09.122 16.877 - 16.972: 99.1541% ( 2) 00:18:09.122 16.972 - 17.067: 99.1698% ( 2) 00:18:09.122 17.067 - 17.161: 99.1855% ( 2) 00:18:09.122 17.161 - 17.256: 99.2168% ( 4) 00:18:09.122 17.256 - 17.351: 99.2246% ( 1) 00:18:09.122 17.351 - 17.446: 99.2403% ( 2) 00:18:09.123 18.015 - 18.110: 99.2481% ( 1) 00:18:09.123 18.110 - 18.204: 99.2560% ( 1) 00:18:09.123 18.394 - 18.489: 99.2638% ( 1) 00:18:09.123 
19.058 - 19.153: 99.2716% ( 1) 00:18:09.123 28.065 - 28.255: 99.2794% ( 1) 00:18:09.123 3009.801 - 3021.938: 99.2873% ( 1) 00:18:09.123 3980.705 - 4004.978: 99.7259% ( 56) 00:18:09.123 4004.978 - 4029.250: 100.0000% ( 35) 00:18:09.123 00:18:09.123 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:09.123 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:09.123 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:09.123 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:09.123 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:09.382 [ 00:18:09.382 { 00:18:09.382 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:09.382 "subtype": "Discovery", 00:18:09.382 "listen_addresses": [], 00:18:09.382 "allow_any_host": true, 00:18:09.382 "hosts": [] 00:18:09.382 }, 00:18:09.382 { 00:18:09.382 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:09.382 "subtype": "NVMe", 00:18:09.382 "listen_addresses": [ 00:18:09.382 { 00:18:09.382 "trtype": "VFIOUSER", 00:18:09.382 "adrfam": "IPv4", 00:18:09.382 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:09.382 "trsvcid": "0" 00:18:09.382 } 00:18:09.382 ], 00:18:09.382 "allow_any_host": true, 00:18:09.382 "hosts": [], 00:18:09.382 "serial_number": "SPDK1", 00:18:09.382 "model_number": "SPDK bdev Controller", 00:18:09.382 "max_namespaces": 32, 00:18:09.382 "min_cntlid": 1, 00:18:09.382 "max_cntlid": 65519, 00:18:09.382 "namespaces": [ 00:18:09.382 { 00:18:09.382 "nsid": 1, 00:18:09.382 "bdev_name": "Malloc1", 00:18:09.382 "name": "Malloc1", 00:18:09.382 "nguid": "B68624A51D9D4C69B919B44AB28EE00E", 00:18:09.382 "uuid": "b68624a5-1d9d-4c69-b919-b44ab28ee00e" 00:18:09.382 } 00:18:09.382 ] 00:18:09.382 }, 00:18:09.382 { 00:18:09.382 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:09.382 "subtype": "NVMe", 00:18:09.382 "listen_addresses": [ 00:18:09.382 { 00:18:09.382 "trtype": "VFIOUSER", 00:18:09.382 "adrfam": "IPv4", 00:18:09.382 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:09.382 "trsvcid": "0" 00:18:09.382 } 00:18:09.382 ], 00:18:09.382 "allow_any_host": true, 00:18:09.382 "hosts": [], 00:18:09.382 "serial_number": "SPDK2", 00:18:09.382 "model_number": "SPDK bdev Controller", 00:18:09.382 "max_namespaces": 32, 00:18:09.382 "min_cntlid": 1, 00:18:09.382 "max_cntlid": 65519, 00:18:09.382 "namespaces": [ 00:18:09.382 { 00:18:09.382 "nsid": 1, 00:18:09.382 "bdev_name": "Malloc2", 00:18:09.382 "name": "Malloc2", 00:18:09.382 "nguid": "4D0B88AC717640369DA02ACEA9D8749C", 00:18:09.382 "uuid": "4d0b88ac-7176-4036-9da0-2acea9d8749c" 00:18:09.382 } 00:18:09.382 ] 00:18:09.382 } 00:18:09.382 ] 00:18:09.382 07:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:09.382 07:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=231644 00:18:09.382 07:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 
subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:09.382 07:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:09.382 07:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:09.382 07:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:09.382 07:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:18:09.382 07:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:18:09.382 07:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:09.382 07:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:09.382 07:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:18:09.382 07:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:18:09.382 07:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:09.383 [2024-11-18 07:03:30.330027] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:09.643 07:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:09.643 07:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:09.643 07:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:09.643 07:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:09.643 07:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:09.901 Malloc3 00:18:09.902 07:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:10.160 [2024-11-18 07:03:30.931413] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:10.160 07:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:10.160 Asynchronous Event Request test 00:18:10.160 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:10.160 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:10.160 Registering asynchronous event callbacks... 00:18:10.160 Starting namespace attribute notice tests for all controllers... 00:18:10.160 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:10.160 aer_cb - Changed Namespace 00:18:10.160 Cleaning up... 
00:18:10.420 [ 00:18:10.420 { 00:18:10.420 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:10.420 "subtype": "Discovery", 00:18:10.420 "listen_addresses": [], 00:18:10.420 "allow_any_host": true, 00:18:10.420 "hosts": [] 00:18:10.420 }, 00:18:10.420 { 00:18:10.420 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:10.420 "subtype": "NVMe", 00:18:10.420 "listen_addresses": [ 00:18:10.420 { 00:18:10.420 "trtype": "VFIOUSER", 00:18:10.420 "adrfam": "IPv4", 00:18:10.420 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:10.420 "trsvcid": "0" 00:18:10.420 } 00:18:10.420 ], 00:18:10.420 "allow_any_host": true, 00:18:10.420 "hosts": [], 00:18:10.420 "serial_number": "SPDK1", 00:18:10.420 "model_number": "SPDK bdev Controller", 00:18:10.420 "max_namespaces": 32, 00:18:10.420 "min_cntlid": 1, 00:18:10.420 "max_cntlid": 65519, 00:18:10.420 "namespaces": [ 00:18:10.420 { 00:18:10.420 "nsid": 1, 00:18:10.420 "bdev_name": "Malloc1", 00:18:10.420 "name": "Malloc1", 00:18:10.420 "nguid": "B68624A51D9D4C69B919B44AB28EE00E", 00:18:10.420 "uuid": "b68624a5-1d9d-4c69-b919-b44ab28ee00e" 00:18:10.420 }, 00:18:10.420 { 00:18:10.420 "nsid": 2, 00:18:10.420 "bdev_name": "Malloc3", 00:18:10.420 "name": "Malloc3", 00:18:10.420 "nguid": "DF741D38219246878C2B9CD1DB4667F2", 00:18:10.420 "uuid": "df741d38-2192-4687-8c2b-9cd1db4667f2" 00:18:10.420 } 00:18:10.420 ] 00:18:10.420 }, 00:18:10.420 { 00:18:10.420 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:10.420 "subtype": "NVMe", 00:18:10.420 "listen_addresses": [ 00:18:10.420 { 00:18:10.420 "trtype": "VFIOUSER", 00:18:10.420 "adrfam": "IPv4", 00:18:10.420 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:10.420 "trsvcid": "0" 00:18:10.420 } 00:18:10.420 ], 00:18:10.420 "allow_any_host": true, 00:18:10.420 "hosts": [], 00:18:10.420 "serial_number": "SPDK2", 00:18:10.420 "model_number": "SPDK bdev Controller", 00:18:10.420 "max_namespaces": 32, 00:18:10.420 "min_cntlid": 1, 00:18:10.420 "max_cntlid": 65519, 00:18:10.420 "namespaces": [ 00:18:10.420 { 00:18:10.420 "nsid": 1, 00:18:10.420 "bdev_name": "Malloc2", 00:18:10.420 "name": "Malloc2", 00:18:10.420 "nguid": "4D0B88AC717640369DA02ACEA9D8749C", 00:18:10.420 "uuid": "4d0b88ac-7176-4036-9da0-2acea9d8749c" 00:18:10.420 } 00:18:10.420 ] 00:18:10.420 } 00:18:10.421 ] 00:18:10.421 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 231644 00:18:10.421 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:10.421 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:10.421 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:10.421 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:10.421 [2024-11-18 07:03:31.240810] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:18:10.421 [2024-11-18 07:03:31.240862] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid231782 ] 00:18:10.421 [2024-11-18 07:03:31.288271] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:10.421 [2024-11-18 07:03:31.296794] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:10.421 [2024-11-18 07:03:31.296825] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8c08836000 00:18:10.421 [2024-11-18 07:03:31.297780] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:10.421 [2024-11-18 07:03:31.298807] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:10.421 [2024-11-18 07:03:31.299810] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:10.421 [2024-11-18 07:03:31.300819] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:10.421 [2024-11-18 07:03:31.301821] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:10.421 [2024-11-18 07:03:31.302825] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:10.421 [2024-11-18 07:03:31.303848] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:10.421 [2024-11-18 07:03:31.304851] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:10.421 [2024-11-18 07:03:31.305860] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:10.421 [2024-11-18 07:03:31.305881] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8c0752e000 00:18:10.421 [2024-11-18 07:03:31.306993] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:10.421 [2024-11-18 07:03:31.321725] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:10.421 [2024-11-18 07:03:31.321762] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:10.421 [2024-11-18 07:03:31.325857] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:10.421 [2024-11-18 07:03:31.325910] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:10.421 [2024-11-18 07:03:31.325993] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:10.421 
[2024-11-18 07:03:31.326014] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:10.421 [2024-11-18 07:03:31.326025] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:10.421 [2024-11-18 07:03:31.326868] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:10.421 [2024-11-18 07:03:31.326889] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:10.421 [2024-11-18 07:03:31.326902] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:10.421 [2024-11-18 07:03:31.327871] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:10.421 [2024-11-18 07:03:31.327891] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:10.421 [2024-11-18 07:03:31.327904] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:10.421 [2024-11-18 07:03:31.328879] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:10.421 [2024-11-18 07:03:31.328900] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:10.421 [2024-11-18 07:03:31.329885] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:10.421 [2024-11-18 07:03:31.329908] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:10.421 [2024-11-18 07:03:31.329918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:10.421 [2024-11-18 07:03:31.329930] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:10.421 [2024-11-18 07:03:31.330040] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:10.421 [2024-11-18 07:03:31.330048] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:10.421 [2024-11-18 07:03:31.330056] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:10.421 [2024-11-18 07:03:31.330894] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:10.421 [2024-11-18 07:03:31.331899] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:10.421 [2024-11-18 07:03:31.332911] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:10.421 [2024-11-18 07:03:31.333904] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:10.421 [2024-11-18 07:03:31.333982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:10.421 [2024-11-18 07:03:31.334933] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:10.421 [2024-11-18 07:03:31.334954] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:10.421 [2024-11-18 07:03:31.334964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:10.421 [2024-11-18 07:03:31.335003] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:10.421 [2024-11-18 07:03:31.335018] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:10.421 [2024-11-18 07:03:31.335039] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:10.421 [2024-11-18 07:03:31.335049] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:10.421 [2024-11-18 07:03:31.335056] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.421 [2024-11-18 07:03:31.335072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:10.421 [2024-11-18 07:03:31.343503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:10.421 [2024-11-18 07:03:31.343527] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:10.421 [2024-11-18 07:03:31.343538] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:10.421 [2024-11-18 07:03:31.343545] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:10.421 [2024-11-18 07:03:31.343554] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:10.421 [2024-11-18 07:03:31.343571] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:10.421 [2024-11-18 07:03:31.343581] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:10.421 [2024-11-18 07:03:31.343590] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:10.421 [2024-11-18 07:03:31.343606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:10.421 [2024-11-18 
07:03:31.343624] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:10.421 [2024-11-18 07:03:31.351505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:10.421 [2024-11-18 07:03:31.351530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.421 [2024-11-18 07:03:31.351544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.421 [2024-11-18 07:03:31.351557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.421 [2024-11-18 07:03:31.351570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.421 [2024-11-18 07:03:31.351579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:10.421 [2024-11-18 07:03:31.351593] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:10.421 [2024-11-18 07:03:31.351607] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:10.421 [2024-11-18 07:03:31.359503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:10.421 [2024-11-18 07:03:31.359527] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:10.421 [2024-11-18 07:03:31.359538] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:10.422 [2024-11-18 07:03:31.359550] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:10.422 [2024-11-18 07:03:31.359560] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:10.422 [2024-11-18 07:03:31.359574] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:10.422 [2024-11-18 07:03:31.367503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:10.422 [2024-11-18 07:03:31.367580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:10.422 [2024-11-18 07:03:31.367597] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:10.422 [2024-11-18 07:03:31.367610] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:10.422 [2024-11-18 07:03:31.367619] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:18:10.422 [2024-11-18 07:03:31.367625] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.422 [2024-11-18 07:03:31.367639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:10.422 [2024-11-18 07:03:31.375516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:10.422 [2024-11-18 07:03:31.375544] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:10.422 [2024-11-18 07:03:31.375561] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:10.422 [2024-11-18 07:03:31.375577] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:10.422 [2024-11-18 07:03:31.375590] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:10.422 [2024-11-18 07:03:31.375599] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:10.422 [2024-11-18 07:03:31.375605] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.422 [2024-11-18 07:03:31.375614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:10.422 [2024-11-18 07:03:31.383500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:10.422 [2024-11-18 07:03:31.383528] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:10.422 [2024-11-18 07:03:31.383545] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:10.422 [2024-11-18 07:03:31.383559] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:10.422 [2024-11-18 07:03:31.383567] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:10.422 [2024-11-18 07:03:31.383573] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.422 [2024-11-18 07:03:31.383583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:10.422 [2024-11-18 07:03:31.391503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:10.422 [2024-11-18 07:03:31.391524] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:10.422 [2024-11-18 07:03:31.391537] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:10.422 [2024-11-18 07:03:31.391551] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:18:10.422 [2024-11-18 07:03:31.391562] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:10.422 [2024-11-18 07:03:31.391571] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:10.422 [2024-11-18 07:03:31.391580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:10.422 [2024-11-18 07:03:31.391588] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:10.422 [2024-11-18 07:03:31.391596] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:10.422 [2024-11-18 07:03:31.391608] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:10.422 [2024-11-18 07:03:31.391634] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:10.684 [2024-11-18 07:03:31.399499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:10.684 [2024-11-18 07:03:31.399535] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:10.684 [2024-11-18 07:03:31.407501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:10.684 [2024-11-18 07:03:31.407526] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:10.684 [2024-11-18 07:03:31.415499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:10.684 [2024-11-18 07:03:31.415525] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:10.684 [2024-11-18 07:03:31.423501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:10.684 [2024-11-18 07:03:31.423533] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:10.684 [2024-11-18 07:03:31.423544] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:10.684 [2024-11-18 07:03:31.423550] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:10.684 [2024-11-18 07:03:31.423557] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:10.684 [2024-11-18 07:03:31.423563] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:10.684 [2024-11-18 07:03:31.423572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:10.684 [2024-11-18 07:03:31.423584] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:10.684 
[2024-11-18 07:03:31.423592] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:10.684 [2024-11-18 07:03:31.423598] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.684 [2024-11-18 07:03:31.423607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:10.684 [2024-11-18 07:03:31.423618] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:10.684 [2024-11-18 07:03:31.423626] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:10.684 [2024-11-18 07:03:31.423632] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.684 [2024-11-18 07:03:31.423641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:10.684 [2024-11-18 07:03:31.423653] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:10.684 [2024-11-18 07:03:31.423661] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:10.684 [2024-11-18 07:03:31.423666] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.684 [2024-11-18 07:03:31.423675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:10.684 [2024-11-18 07:03:31.431501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:10.684 [2024-11-18 07:03:31.431529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:10.684 [2024-11-18 07:03:31.431550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:10.684 [2024-11-18 07:03:31.431563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:10.684 ===================================================== 00:18:10.684 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:10.684 ===================================================== 00:18:10.684 Controller Capabilities/Features 00:18:10.684 ================================ 00:18:10.684 Vendor ID: 4e58 00:18:10.684 Subsystem Vendor ID: 4e58 00:18:10.684 Serial Number: SPDK2 00:18:10.684 Model Number: SPDK bdev Controller 00:18:10.684 Firmware Version: 25.01 00:18:10.684 Recommended Arb Burst: 6 00:18:10.684 IEEE OUI Identifier: 8d 6b 50 00:18:10.684 Multi-path I/O 00:18:10.684 May have multiple subsystem ports: Yes 00:18:10.684 May have multiple controllers: Yes 00:18:10.684 Associated with SR-IOV VF: No 00:18:10.684 Max Data Transfer Size: 131072 00:18:10.684 Max Number of Namespaces: 32 00:18:10.684 Max Number of I/O Queues: 127 00:18:10.684 NVMe Specification Version (VS): 1.3 00:18:10.684 NVMe Specification Version (Identify): 1.3 00:18:10.684 Maximum Queue Entries: 256 00:18:10.684 Contiguous Queues Required: Yes 00:18:10.684 Arbitration Mechanisms Supported 00:18:10.684 Weighted Round Robin: Not Supported 00:18:10.684 Vendor Specific: Not 
Supported 00:18:10.684 Reset Timeout: 15000 ms 00:18:10.684 Doorbell Stride: 4 bytes 00:18:10.684 NVM Subsystem Reset: Not Supported 00:18:10.684 Command Sets Supported 00:18:10.684 NVM Command Set: Supported 00:18:10.684 Boot Partition: Not Supported 00:18:10.685 Memory Page Size Minimum: 4096 bytes 00:18:10.685 Memory Page Size Maximum: 4096 bytes 00:18:10.685 Persistent Memory Region: Not Supported 00:18:10.685 Optional Asynchronous Events Supported 00:18:10.685 Namespace Attribute Notices: Supported 00:18:10.685 Firmware Activation Notices: Not Supported 00:18:10.685 ANA Change Notices: Not Supported 00:18:10.685 PLE Aggregate Log Change Notices: Not Supported 00:18:10.685 LBA Status Info Alert Notices: Not Supported 00:18:10.685 EGE Aggregate Log Change Notices: Not Supported 00:18:10.685 Normal NVM Subsystem Shutdown event: Not Supported 00:18:10.685 Zone Descriptor Change Notices: Not Supported 00:18:10.685 Discovery Log Change Notices: Not Supported 00:18:10.685 Controller Attributes 00:18:10.685 128-bit Host Identifier: Supported 00:18:10.685 Non-Operational Permissive Mode: Not Supported 00:18:10.685 NVM Sets: Not Supported 00:18:10.685 Read Recovery Levels: Not Supported 00:18:10.685 Endurance Groups: Not Supported 00:18:10.685 Predictable Latency Mode: Not Supported 00:18:10.685 Traffic Based Keep ALive: Not Supported 00:18:10.685 Namespace Granularity: Not Supported 00:18:10.685 SQ Associations: Not Supported 00:18:10.685 UUID List: Not Supported 00:18:10.685 Multi-Domain Subsystem: Not Supported 00:18:10.685 Fixed Capacity Management: Not Supported 00:18:10.685 Variable Capacity Management: Not Supported 00:18:10.685 Delete Endurance Group: Not Supported 00:18:10.685 Delete NVM Set: Not Supported 00:18:10.685 Extended LBA Formats Supported: Not Supported 00:18:10.685 Flexible Data Placement Supported: Not Supported 00:18:10.685 00:18:10.685 Controller Memory Buffer Support 00:18:10.685 ================================ 00:18:10.685 Supported: No 00:18:10.685 00:18:10.685 Persistent Memory Region Support 00:18:10.685 ================================ 00:18:10.685 Supported: No 00:18:10.685 00:18:10.685 Admin Command Set Attributes 00:18:10.685 ============================ 00:18:10.685 Security Send/Receive: Not Supported 00:18:10.685 Format NVM: Not Supported 00:18:10.685 Firmware Activate/Download: Not Supported 00:18:10.685 Namespace Management: Not Supported 00:18:10.685 Device Self-Test: Not Supported 00:18:10.685 Directives: Not Supported 00:18:10.685 NVMe-MI: Not Supported 00:18:10.685 Virtualization Management: Not Supported 00:18:10.685 Doorbell Buffer Config: Not Supported 00:18:10.685 Get LBA Status Capability: Not Supported 00:18:10.685 Command & Feature Lockdown Capability: Not Supported 00:18:10.685 Abort Command Limit: 4 00:18:10.685 Async Event Request Limit: 4 00:18:10.685 Number of Firmware Slots: N/A 00:18:10.685 Firmware Slot 1 Read-Only: N/A 00:18:10.685 Firmware Activation Without Reset: N/A 00:18:10.685 Multiple Update Detection Support: N/A 00:18:10.685 Firmware Update Granularity: No Information Provided 00:18:10.685 Per-Namespace SMART Log: No 00:18:10.685 Asymmetric Namespace Access Log Page: Not Supported 00:18:10.685 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:10.685 Command Effects Log Page: Supported 00:18:10.685 Get Log Page Extended Data: Supported 00:18:10.685 Telemetry Log Pages: Not Supported 00:18:10.685 Persistent Event Log Pages: Not Supported 00:18:10.685 Supported Log Pages Log Page: May Support 00:18:10.685 Commands Supported & 
Effects Log Page: Not Supported 00:18:10.685 Feature Identifiers & Effects Log Page:May Support 00:18:10.685 NVMe-MI Commands & Effects Log Page: May Support 00:18:10.685 Data Area 4 for Telemetry Log: Not Supported 00:18:10.685 Error Log Page Entries Supported: 128 00:18:10.685 Keep Alive: Supported 00:18:10.685 Keep Alive Granularity: 10000 ms 00:18:10.685 00:18:10.685 NVM Command Set Attributes 00:18:10.685 ========================== 00:18:10.685 Submission Queue Entry Size 00:18:10.685 Max: 64 00:18:10.685 Min: 64 00:18:10.685 Completion Queue Entry Size 00:18:10.685 Max: 16 00:18:10.685 Min: 16 00:18:10.685 Number of Namespaces: 32 00:18:10.685 Compare Command: Supported 00:18:10.685 Write Uncorrectable Command: Not Supported 00:18:10.685 Dataset Management Command: Supported 00:18:10.685 Write Zeroes Command: Supported 00:18:10.685 Set Features Save Field: Not Supported 00:18:10.685 Reservations: Not Supported 00:18:10.685 Timestamp: Not Supported 00:18:10.685 Copy: Supported 00:18:10.685 Volatile Write Cache: Present 00:18:10.685 Atomic Write Unit (Normal): 1 00:18:10.685 Atomic Write Unit (PFail): 1 00:18:10.685 Atomic Compare & Write Unit: 1 00:18:10.685 Fused Compare & Write: Supported 00:18:10.685 Scatter-Gather List 00:18:10.685 SGL Command Set: Supported (Dword aligned) 00:18:10.685 SGL Keyed: Not Supported 00:18:10.685 SGL Bit Bucket Descriptor: Not Supported 00:18:10.685 SGL Metadata Pointer: Not Supported 00:18:10.685 Oversized SGL: Not Supported 00:18:10.685 SGL Metadata Address: Not Supported 00:18:10.685 SGL Offset: Not Supported 00:18:10.685 Transport SGL Data Block: Not Supported 00:18:10.685 Replay Protected Memory Block: Not Supported 00:18:10.685 00:18:10.685 Firmware Slot Information 00:18:10.685 ========================= 00:18:10.685 Active slot: 1 00:18:10.685 Slot 1 Firmware Revision: 25.01 00:18:10.685 00:18:10.685 00:18:10.685 Commands Supported and Effects 00:18:10.685 ============================== 00:18:10.685 Admin Commands 00:18:10.685 -------------- 00:18:10.685 Get Log Page (02h): Supported 00:18:10.685 Identify (06h): Supported 00:18:10.685 Abort (08h): Supported 00:18:10.685 Set Features (09h): Supported 00:18:10.685 Get Features (0Ah): Supported 00:18:10.685 Asynchronous Event Request (0Ch): Supported 00:18:10.685 Keep Alive (18h): Supported 00:18:10.685 I/O Commands 00:18:10.685 ------------ 00:18:10.685 Flush (00h): Supported LBA-Change 00:18:10.685 Write (01h): Supported LBA-Change 00:18:10.685 Read (02h): Supported 00:18:10.685 Compare (05h): Supported 00:18:10.685 Write Zeroes (08h): Supported LBA-Change 00:18:10.685 Dataset Management (09h): Supported LBA-Change 00:18:10.685 Copy (19h): Supported LBA-Change 00:18:10.685 00:18:10.685 Error Log 00:18:10.685 ========= 00:18:10.685 00:18:10.685 Arbitration 00:18:10.685 =========== 00:18:10.685 Arbitration Burst: 1 00:18:10.685 00:18:10.685 Power Management 00:18:10.685 ================ 00:18:10.685 Number of Power States: 1 00:18:10.685 Current Power State: Power State #0 00:18:10.685 Power State #0: 00:18:10.685 Max Power: 0.00 W 00:18:10.685 Non-Operational State: Operational 00:18:10.685 Entry Latency: Not Reported 00:18:10.685 Exit Latency: Not Reported 00:18:10.685 Relative Read Throughput: 0 00:18:10.685 Relative Read Latency: 0 00:18:10.685 Relative Write Throughput: 0 00:18:10.685 Relative Write Latency: 0 00:18:10.685 Idle Power: Not Reported 00:18:10.685 Active Power: Not Reported 00:18:10.685 Non-Operational Permissive Mode: Not Supported 00:18:10.685 00:18:10.685 Health Information 
00:18:10.685 ================== 00:18:10.685 Critical Warnings: 00:18:10.685 Available Spare Space: OK 00:18:10.685 Temperature: OK 00:18:10.685 Device Reliability: OK 00:18:10.685 Read Only: No 00:18:10.685 Volatile Memory Backup: OK 00:18:10.685 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:10.685 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:10.685 Available Spare: 0% 00:18:10.685 Available Sp[2024-11-18 07:03:31.431687] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:10.685 [2024-11-18 07:03:31.439499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:10.685 [2024-11-18 07:03:31.439551] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:10.685 [2024-11-18 07:03:31.439569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.685 [2024-11-18 07:03:31.439580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.685 [2024-11-18 07:03:31.439591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.685 [2024-11-18 07:03:31.439600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.685 [2024-11-18 07:03:31.439667] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:10.685 [2024-11-18 07:03:31.439688] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:10.685 [2024-11-18 07:03:31.440667] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:10.685 [2024-11-18 07:03:31.440740] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:10.685 [2024-11-18 07:03:31.440755] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:10.686 [2024-11-18 07:03:31.441672] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:10.686 [2024-11-18 07:03:31.441695] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:10.686 [2024-11-18 07:03:31.441747] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:10.686 [2024-11-18 07:03:31.442932] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:10.686 are Threshold: 0% 00:18:10.686 Life Percentage Used: 0% 00:18:10.686 Data Units Read: 0 00:18:10.686 Data Units Written: 0 00:18:10.686 Host Read Commands: 0 00:18:10.686 Host Write Commands: 0 00:18:10.686 Controller Busy Time: 0 minutes 00:18:10.686 Power Cycles: 0 00:18:10.686 Power On Hours: 0 hours 00:18:10.686 Unsafe Shutdowns: 0 00:18:10.686 Unrecoverable Media Errors: 0 00:18:10.686 Lifetime Error Log Entries: 0 00:18:10.686 Warning Temperature 
Time: 0 minutes 00:18:10.686 Critical Temperature Time: 0 minutes 00:18:10.686 00:18:10.686 Number of Queues 00:18:10.686 ================ 00:18:10.686 Number of I/O Submission Queues: 127 00:18:10.686 Number of I/O Completion Queues: 127 00:18:10.686 00:18:10.686 Active Namespaces 00:18:10.686 ================= 00:18:10.686 Namespace ID:1 00:18:10.686 Error Recovery Timeout: Unlimited 00:18:10.686 Command Set Identifier: NVM (00h) 00:18:10.686 Deallocate: Supported 00:18:10.686 Deallocated/Unwritten Error: Not Supported 00:18:10.686 Deallocated Read Value: Unknown 00:18:10.686 Deallocate in Write Zeroes: Not Supported 00:18:10.686 Deallocated Guard Field: 0xFFFF 00:18:10.686 Flush: Supported 00:18:10.686 Reservation: Supported 00:18:10.686 Namespace Sharing Capabilities: Multiple Controllers 00:18:10.686 Size (in LBAs): 131072 (0GiB) 00:18:10.686 Capacity (in LBAs): 131072 (0GiB) 00:18:10.686 Utilization (in LBAs): 131072 (0GiB) 00:18:10.686 NGUID: 4D0B88AC717640369DA02ACEA9D8749C 00:18:10.686 UUID: 4d0b88ac-7176-4036-9da0-2acea9d8749c 00:18:10.686 Thin Provisioning: Not Supported 00:18:10.686 Per-NS Atomic Units: Yes 00:18:10.686 Atomic Boundary Size (Normal): 0 00:18:10.686 Atomic Boundary Size (PFail): 0 00:18:10.686 Atomic Boundary Offset: 0 00:18:10.686 Maximum Single Source Range Length: 65535 00:18:10.686 Maximum Copy Length: 65535 00:18:10.686 Maximum Source Range Count: 1 00:18:10.686 NGUID/EUI64 Never Reused: No 00:18:10.686 Namespace Write Protected: No 00:18:10.686 Number of LBA Formats: 1 00:18:10.686 Current LBA Format: LBA Format #00 00:18:10.686 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:10.686 00:18:10.686 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:10.945 [2024-11-18 07:03:31.679437] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:16.224 Initializing NVMe Controllers 00:18:16.224 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:16.224 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:16.224 Initialization complete. Launching workers. 
00:18:16.224 ======================================================== 00:18:16.224 Latency(us) 00:18:16.224 Device Information : IOPS MiB/s Average min max 00:18:16.224 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34790.51 135.90 3678.36 1171.30 8988.75 00:18:16.224 ======================================================== 00:18:16.224 Total : 34790.51 135.90 3678.36 1171.30 8988.75 00:18:16.224 00:18:16.224 [2024-11-18 07:03:36.784853] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:16.224 07:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:16.224 [2024-11-18 07:03:37.042598] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:21.505 Initializing NVMe Controllers 00:18:21.505 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:21.505 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:21.505 Initialization complete. Launching workers. 00:18:21.505 ======================================================== 00:18:21.505 Latency(us) 00:18:21.505 Device Information : IOPS MiB/s Average min max 00:18:21.505 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31123.20 121.58 4113.90 1216.92 9328.90 00:18:21.505 ======================================================== 00:18:21.505 Total : 31123.20 121.58 4113.90 1216.92 9328.90 00:18:21.505 00:18:21.505 [2024-11-18 07:03:42.063395] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:21.505 07:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:21.505 [2024-11-18 07:03:42.288356] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:26.789 [2024-11-18 07:03:47.432642] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:26.789 Initializing NVMe Controllers 00:18:26.789 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:26.789 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:26.789 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:26.789 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:26.789 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:26.789 Initialization complete. Launching workers. 
00:18:26.789 Starting thread on core 2 00:18:26.789 Starting thread on core 3 00:18:26.789 Starting thread on core 1 00:18:26.789 07:03:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:26.789 [2024-11-18 07:03:47.753129] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:30.086 [2024-11-18 07:03:50.832487] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:30.086 Initializing NVMe Controllers 00:18:30.086 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:30.086 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:30.086 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:30.086 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:30.086 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:30.086 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:30.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:30.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:30.086 Initialization complete. Launching workers. 00:18:30.086 Starting thread on core 1 with urgent priority queue 00:18:30.086 Starting thread on core 2 with urgent priority queue 00:18:30.086 Starting thread on core 3 with urgent priority queue 00:18:30.086 Starting thread on core 0 with urgent priority queue 00:18:30.086 SPDK bdev Controller (SPDK2 ) core 0: 6204.33 IO/s 16.12 secs/100000 ios 00:18:30.086 SPDK bdev Controller (SPDK2 ) core 1: 5527.00 IO/s 18.09 secs/100000 ios 00:18:30.086 SPDK bdev Controller (SPDK2 ) core 2: 5653.33 IO/s 17.69 secs/100000 ios 00:18:30.086 SPDK bdev Controller (SPDK2 ) core 3: 6268.00 IO/s 15.95 secs/100000 ios 00:18:30.086 ======================================================== 00:18:30.086 00:18:30.086 07:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:30.345 [2024-11-18 07:03:51.158042] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:30.345 Initializing NVMe Controllers 00:18:30.345 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:30.345 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:30.345 Namespace ID: 1 size: 0GB 00:18:30.345 Initialization complete. 00:18:30.345 INFO: using host memory buffer for IO 00:18:30.345 Hello world! 
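A condensed sketch of the invocation pattern shared by the example binaries exercised in this stretch (a reading aid, not captured output): each tool attaches to the second vfio-user controller through a transport ID string passed with -r instead of a PCI address. Flags are copied from the commands above; paths are relative to the SPDK checkout used by this job.

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
build/bin/spdk_nvme_identify -r "$TRID" -g -L nvme -L nvme_vfio -L vfio_pci        # controller report above
build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2   # 4k read pass
build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2  # 4k write pass
build/examples/hello_world -r "$TRID" -d 256 -g                                    # "Hello world!" run above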
00:18:30.345 [2024-11-18 07:03:51.167109] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:30.345 07:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:30.603 [2024-11-18 07:03:51.479708] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:31.981 Initializing NVMe Controllers 00:18:31.981 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:31.981 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:31.981 Initialization complete. Launching workers. 00:18:31.981 submit (in ns) avg, min, max = 8570.5, 3481.1, 4015693.3 00:18:31.981 complete (in ns) avg, min, max = 26664.7, 2050.0, 4015472.2 00:18:31.981 00:18:31.981 Submit histogram 00:18:31.981 ================ 00:18:31.981 Range in us Cumulative Count 00:18:31.981 3.461 - 3.484: 0.0155% ( 2) 00:18:31.981 3.484 - 3.508: 0.5660% ( 71) 00:18:31.981 3.508 - 3.532: 1.4422% ( 113) 00:18:31.981 3.532 - 3.556: 4.1870% ( 354) 00:18:31.981 3.556 - 3.579: 9.0641% ( 629) 00:18:31.981 3.579 - 3.603: 17.0272% ( 1027) 00:18:31.981 3.603 - 3.627: 25.0446% ( 1034) 00:18:31.981 3.627 - 3.650: 34.9151% ( 1273) 00:18:31.981 3.650 - 3.674: 42.6068% ( 992) 00:18:31.981 3.674 - 3.698: 49.8100% ( 929) 00:18:31.981 3.698 - 3.721: 56.5868% ( 874) 00:18:31.981 3.721 - 3.745: 61.9369% ( 690) 00:18:31.981 3.745 - 3.769: 65.9378% ( 516) 00:18:31.981 3.769 - 3.793: 69.5433% ( 465) 00:18:31.981 3.793 - 3.816: 72.9084% ( 434) 00:18:31.981 3.816 - 3.840: 76.3433% ( 443) 00:18:31.981 3.840 - 3.864: 80.1659% ( 493) 00:18:31.981 3.864 - 3.887: 83.1899% ( 390) 00:18:31.981 3.887 - 3.911: 85.8029% ( 337) 00:18:31.981 3.911 - 3.935: 87.9507% ( 277) 00:18:31.981 3.935 - 3.959: 89.7883% ( 237) 00:18:31.981 3.959 - 3.982: 91.4864% ( 219) 00:18:31.981 3.982 - 4.006: 92.9518% ( 189) 00:18:31.981 4.006 - 4.030: 94.0529% ( 142) 00:18:31.981 4.030 - 4.053: 94.9601% ( 117) 00:18:31.981 4.053 - 4.077: 95.5338% ( 74) 00:18:31.981 4.077 - 4.101: 95.9138% ( 49) 00:18:31.981 4.101 - 4.124: 96.1541% ( 31) 00:18:31.981 4.124 - 4.148: 96.2937% ( 18) 00:18:31.981 4.148 - 4.172: 96.4798% ( 24) 00:18:31.981 4.172 - 4.196: 96.5961% ( 15) 00:18:31.981 4.196 - 4.219: 96.7124% ( 15) 00:18:31.981 4.219 - 4.243: 96.8752% ( 21) 00:18:31.981 4.243 - 4.267: 96.9760% ( 13) 00:18:31.981 4.267 - 4.290: 97.0536% ( 10) 00:18:31.981 4.290 - 4.314: 97.0691% ( 2) 00:18:31.981 4.314 - 4.338: 97.1079% ( 5) 00:18:31.981 4.338 - 4.361: 97.1311% ( 3) 00:18:31.981 4.361 - 4.385: 97.1621% ( 4) 00:18:31.981 4.385 - 4.409: 97.1776% ( 2) 00:18:31.981 4.409 - 4.433: 97.1854% ( 1) 00:18:31.981 4.433 - 4.456: 97.1931% ( 1) 00:18:31.981 4.456 - 4.480: 97.2009% ( 1) 00:18:31.981 4.480 - 4.504: 97.2242% ( 3) 00:18:31.981 4.527 - 4.551: 97.2319% ( 1) 00:18:31.981 4.551 - 4.575: 97.2397% ( 1) 00:18:31.981 4.575 - 4.599: 97.2552% ( 2) 00:18:31.981 4.599 - 4.622: 97.2707% ( 2) 00:18:31.981 4.622 - 4.646: 97.2784% ( 1) 00:18:31.981 4.646 - 4.670: 97.2862% ( 1) 00:18:31.981 4.670 - 4.693: 97.3095% ( 3) 00:18:31.981 4.693 - 4.717: 97.3482% ( 5) 00:18:31.981 4.717 - 4.741: 97.3792% ( 4) 00:18:31.981 4.741 - 4.764: 97.4490% ( 9) 00:18:31.981 4.764 - 4.788: 97.4723% ( 3) 00:18:31.981 4.788 - 4.812: 97.5266% ( 7) 00:18:31.981 4.812 - 4.836: 97.5421% ( 2) 00:18:31.981 4.836 - 
4.859: 97.5808% ( 5) 00:18:31.981 4.859 - 4.883: 97.6041% ( 3) 00:18:31.981 4.883 - 4.907: 97.6196% ( 2) 00:18:31.981 4.907 - 4.930: 97.6816% ( 8) 00:18:31.981 4.930 - 4.954: 97.7126% ( 4) 00:18:31.981 4.954 - 4.978: 97.7747% ( 8) 00:18:31.981 4.978 - 5.001: 97.8212% ( 6) 00:18:31.981 5.001 - 5.025: 97.8522% ( 4) 00:18:31.981 5.025 - 5.049: 97.8755% ( 3) 00:18:31.981 5.049 - 5.073: 97.8910% ( 2) 00:18:31.981 5.073 - 5.096: 97.9220% ( 4) 00:18:31.981 5.096 - 5.120: 97.9375% ( 2) 00:18:31.981 5.120 - 5.144: 97.9608% ( 3) 00:18:31.981 5.144 - 5.167: 97.9685% ( 1) 00:18:31.981 5.167 - 5.191: 97.9918% ( 3) 00:18:31.981 5.191 - 5.215: 97.9995% ( 1) 00:18:31.981 5.239 - 5.262: 98.0150% ( 2) 00:18:31.981 5.262 - 5.286: 98.0305% ( 2) 00:18:31.981 5.333 - 5.357: 98.0461% ( 2) 00:18:31.981 5.381 - 5.404: 98.0538% ( 1) 00:18:31.981 5.404 - 5.428: 98.0616% ( 1) 00:18:31.981 5.594 - 5.618: 98.0693% ( 1) 00:18:31.982 5.665 - 5.689: 98.0771% ( 1) 00:18:31.982 5.689 - 5.713: 98.0848% ( 1) 00:18:31.982 5.736 - 5.760: 98.0926% ( 1) 00:18:31.982 5.760 - 5.784: 98.1158% ( 3) 00:18:31.982 5.879 - 5.902: 98.1236% ( 1) 00:18:31.982 5.902 - 5.926: 98.1313% ( 1) 00:18:31.982 5.950 - 5.973: 98.1391% ( 1) 00:18:31.982 5.973 - 5.997: 98.1546% ( 2) 00:18:31.982 6.044 - 6.068: 98.1624% ( 1) 00:18:31.982 6.163 - 6.210: 98.1701% ( 1) 00:18:31.982 6.210 - 6.258: 98.1779% ( 1) 00:18:31.982 6.258 - 6.305: 98.1856% ( 1) 00:18:31.982 6.353 - 6.400: 98.1934% ( 1) 00:18:31.982 6.400 - 6.447: 98.2089% ( 2) 00:18:31.982 6.447 - 6.495: 98.2166% ( 1) 00:18:31.982 6.542 - 6.590: 98.2321% ( 2) 00:18:31.982 6.590 - 6.637: 98.2399% ( 1) 00:18:31.982 6.637 - 6.684: 98.2477% ( 1) 00:18:31.982 6.732 - 6.779: 98.2554% ( 1) 00:18:31.982 6.779 - 6.827: 98.2632% ( 1) 00:18:31.982 6.874 - 6.921: 98.2709% ( 1) 00:18:31.982 6.921 - 6.969: 98.2864% ( 2) 00:18:31.982 6.969 - 7.016: 98.3019% ( 2) 00:18:31.982 7.111 - 7.159: 98.3174% ( 2) 00:18:31.982 7.159 - 7.206: 98.3252% ( 1) 00:18:31.982 7.206 - 7.253: 98.3407% ( 2) 00:18:31.982 7.253 - 7.301: 98.3485% ( 1) 00:18:31.982 7.301 - 7.348: 98.3717% ( 3) 00:18:31.982 7.348 - 7.396: 98.3950% ( 3) 00:18:31.982 7.538 - 7.585: 98.4027% ( 1) 00:18:31.982 7.585 - 7.633: 98.4105% ( 1) 00:18:31.982 7.727 - 7.775: 98.4415% ( 4) 00:18:31.982 7.775 - 7.822: 98.4493% ( 1) 00:18:31.982 7.822 - 7.870: 98.4725% ( 3) 00:18:31.982 7.917 - 7.964: 98.4880% ( 2) 00:18:31.982 7.964 - 8.012: 98.5035% ( 2) 00:18:31.982 8.012 - 8.059: 98.5113% ( 1) 00:18:31.982 8.296 - 8.344: 98.5345% ( 3) 00:18:31.982 8.344 - 8.391: 98.5578% ( 3) 00:18:31.982 8.391 - 8.439: 98.5733% ( 2) 00:18:31.982 8.533 - 8.581: 98.5888% ( 2) 00:18:31.982 8.581 - 8.628: 98.5966% ( 1) 00:18:31.982 8.628 - 8.676: 98.6043% ( 1) 00:18:31.982 8.676 - 8.723: 98.6121% ( 1) 00:18:31.982 8.723 - 8.770: 98.6198% ( 1) 00:18:31.982 8.770 - 8.818: 98.6353% ( 2) 00:18:31.982 8.818 - 8.865: 98.6508% ( 2) 00:18:31.982 9.055 - 9.102: 98.6586% ( 1) 00:18:31.982 9.292 - 9.339: 98.6664% ( 1) 00:18:31.982 9.387 - 9.434: 98.6741% ( 1) 00:18:31.982 9.861 - 9.908: 98.6819% ( 1) 00:18:31.982 10.003 - 10.050: 98.6974% ( 2) 00:18:31.982 10.050 - 10.098: 98.7051% ( 1) 00:18:31.982 10.145 - 10.193: 98.7129% ( 1) 00:18:31.982 10.382 - 10.430: 98.7206% ( 1) 00:18:31.982 10.524 - 10.572: 98.7284% ( 1) 00:18:31.982 10.619 - 10.667: 98.7361% ( 1) 00:18:31.982 10.667 - 10.714: 98.7516% ( 2) 00:18:31.982 10.904 - 10.951: 98.7594% ( 1) 00:18:31.982 11.046 - 11.093: 98.7672% ( 1) 00:18:31.982 11.093 - 11.141: 98.7749% ( 1) 00:18:31.982 11.236 - 11.283: 98.7827% ( 1) 00:18:31.982 11.473 - 
11.520: 98.7904% ( 1) 00:18:31.982 11.567 - 11.615: 98.7982% ( 1) 00:18:31.982 11.615 - 11.662: 98.8059% ( 1) 00:18:31.982 11.710 - 11.757: 98.8137% ( 1) 00:18:31.982 11.852 - 11.899: 98.8292% ( 2) 00:18:31.982 11.899 - 11.947: 98.8369% ( 1) 00:18:31.982 11.994 - 12.041: 98.8524% ( 2) 00:18:31.982 12.089 - 12.136: 98.8602% ( 1) 00:18:31.982 12.136 - 12.231: 98.8680% ( 1) 00:18:31.982 12.421 - 12.516: 98.8835% ( 2) 00:18:31.982 12.516 - 12.610: 98.8990% ( 2) 00:18:31.982 12.610 - 12.705: 98.9145% ( 2) 00:18:31.982 12.705 - 12.800: 98.9222% ( 1) 00:18:31.982 12.800 - 12.895: 98.9300% ( 1) 00:18:31.982 13.274 - 13.369: 98.9377% ( 1) 00:18:31.982 13.369 - 13.464: 98.9455% ( 1) 00:18:31.982 13.559 - 13.653: 98.9532% ( 1) 00:18:31.982 13.653 - 13.748: 98.9688% ( 2) 00:18:31.982 13.748 - 13.843: 98.9765% ( 1) 00:18:31.982 13.843 - 13.938: 98.9843% ( 1) 00:18:31.982 14.127 - 14.222: 98.9920% ( 1) 00:18:31.982 14.222 - 14.317: 99.0075% ( 2) 00:18:31.982 14.317 - 14.412: 99.0153% ( 1) 00:18:31.982 14.791 - 14.886: 99.0230% ( 1) 00:18:31.982 15.076 - 15.170: 99.0308% ( 1) 00:18:31.982 16.308 - 16.403: 99.0385% ( 1) 00:18:31.982 17.067 - 17.161: 99.0463% ( 1) 00:18:31.982 17.161 - 17.256: 99.0618% ( 2) 00:18:31.982 17.256 - 17.351: 99.0928% ( 4) 00:18:31.982 17.351 - 17.446: 99.1083% ( 2) 00:18:31.982 17.446 - 17.541: 99.1316% ( 3) 00:18:31.982 17.541 - 17.636: 99.1471% ( 2) 00:18:31.982 17.636 - 17.730: 99.1936% ( 6) 00:18:31.982 17.730 - 17.825: 99.2634% ( 9) 00:18:31.982 17.825 - 17.920: 99.3564% ( 12) 00:18:31.982 17.920 - 18.015: 99.3952% ( 5) 00:18:31.982 18.015 - 18.110: 99.4495% ( 7) 00:18:31.982 18.110 - 18.204: 99.4650% ( 2) 00:18:31.982 18.204 - 18.299: 99.4960% ( 4) 00:18:31.982 18.299 - 18.394: 99.5503% ( 7) 00:18:31.982 18.394 - 18.489: 99.6046% ( 7) 00:18:31.982 18.489 - 18.584: 99.6433% ( 5) 00:18:31.982 18.584 - 18.679: 99.6899% ( 6) 00:18:31.982 18.679 - 18.773: 99.7131% ( 3) 00:18:31.982 18.773 - 18.868: 99.7364% ( 3) 00:18:31.982 18.868 - 18.963: 99.7596% ( 3) 00:18:31.982 19.058 - 19.153: 99.7674% ( 1) 00:18:31.982 19.153 - 19.247: 99.7751% ( 1) 00:18:31.982 19.437 - 19.532: 99.7829% ( 1) 00:18:31.982 19.627 - 19.721: 99.7906% ( 1) 00:18:31.982 19.721 - 19.816: 99.7984% ( 1) 00:18:31.982 19.816 - 19.911: 99.8062% ( 1) 00:18:31.982 19.911 - 20.006: 99.8139% ( 1) 00:18:31.982 20.575 - 20.670: 99.8217% ( 1) 00:18:31.982 20.764 - 20.859: 99.8294% ( 1) 00:18:31.982 20.954 - 21.049: 99.8372% ( 1) 00:18:31.982 22.281 - 22.376: 99.8449% ( 1) 00:18:31.982 22.471 - 22.566: 99.8527% ( 1) 00:18:31.982 24.462 - 24.652: 99.8604% ( 1) 00:18:31.982 24.652 - 24.841: 99.8682% ( 1) 00:18:31.982 25.031 - 25.221: 99.8759% ( 1) 00:18:31.982 26.738 - 26.927: 99.8837% ( 1) 00:18:31.982 3980.705 - 4004.978: 99.9690% ( 11) 00:18:31.982 4004.978 - 4029.250: 100.0000% ( 4) 00:18:31.982 00:18:31.982 Complete histogram 00:18:31.982 ================== 00:18:31.982 Range in us Cumulative Count 00:18:31.982 2.039 - 2.050: 0.0078% ( 1) 00:18:31.983 2.050 - 2.062: 10.3590% ( 1335) 00:18:31.983 2.062 - 2.074: 42.0330% ( 4085) 00:18:31.983 2.074 - 2.086: 44.7081% ( 345) 00:18:31.983 2.086 - 2.098: 50.8103% ( 787) 00:18:31.983 2.098 - 2.110: 57.0598% ( 806) 00:18:31.983 2.110 - 2.121: 58.7501% ( 218) 00:18:31.983 2.121 - 2.133: 70.6443% ( 1534) 00:18:31.983 2.133 - 2.145: 79.3053% ( 1117) 00:18:31.983 2.145 - 2.157: 80.3985% ( 141) 00:18:31.983 2.157 - 2.169: 84.1591% ( 485) 00:18:31.983 2.169 - 2.181: 86.0743% ( 247) 00:18:31.983 2.181 - 2.193: 86.9737% ( 116) 00:18:31.983 2.193 - 2.204: 89.6643% ( 347) 00:18:31.983 
2.204 - 2.216: 91.4631% ( 232) 00:18:31.983 2.216 - 2.228: 93.3473% ( 243) 00:18:31.983 2.228 - 2.240: 94.1847% ( 108) 00:18:31.983 2.240 - 2.252: 94.5104% ( 42) 00:18:31.983 2.252 - 2.264: 94.6189% ( 14) 00:18:31.983 2.264 - 2.276: 94.8050% ( 24) 00:18:31.983 2.276 - 2.287: 95.0454% ( 31) 00:18:31.983 2.287 - 2.299: 95.5804% ( 69) 00:18:31.983 2.299 - 2.311: 95.7199% ( 18) 00:18:31.983 2.311 - 2.323: 95.7820% ( 8) 00:18:31.983 2.323 - 2.335: 95.8905% ( 14) 00:18:31.983 2.335 - 2.347: 96.2549% ( 47) 00:18:31.983 2.347 - 2.359: 96.6039% ( 45) 00:18:31.983 2.359 - 2.370: 97.0303% ( 55) 00:18:31.983 2.370 - 2.382: 97.3715% ( 44) 00:18:31.983 2.382 - 2.394: 97.6196% ( 32) 00:18:31.983 2.394 - 2.406: 97.8290% ( 27) 00:18:31.983 2.406 - 2.418: 97.9220% ( 12) 00:18:31.983 2.418 - 2.430: 98.0693% ( 19) 00:18:31.983 2.430 - 2.441: 98.1469% ( 10) 00:18:31.983 2.441 - 2.453: 98.2321% ( 11) 00:18:31.983 2.453 - 2.465: 98.2787% ( 6) 00:18:31.983 2.465 - 2.477: 98.3252% ( 6) 00:18:31.983 2.477 - 2.489: 98.3407% ( 2) 00:18:31.983 2.489 - 2.501: 98.4027% ( 8) 00:18:31.983 2.501 - 2.513: 98.4182% ( 2) 00:18:31.983 2.513 - 2.524: 98.4260% ( 1) 00:18:31.983 2.536 - 2.548: 98.4337% ( 1) 00:18:31.983 2.548 - 2.560: 98.4570% ( 3) 00:18:31.983 2.596 - 2.607: 98.4648% ( 1) 00:18:31.983 2.607 - 2.619: 98.4803% ( 2) 00:18:31.983 2.631 - 2.643: 98.4880% ( 1) 00:18:31.983 2.643 - 2.655: 9[2024-11-18 07:03:52.581293] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:31.983 8.4958% ( 1) 00:18:31.983 2.655 - 2.667: 98.5035% ( 1) 00:18:31.983 3.437 - 3.461: 98.5113% ( 1) 00:18:31.983 3.532 - 3.556: 98.5190% ( 1) 00:18:31.983 3.556 - 3.579: 98.5268% ( 1) 00:18:31.983 3.603 - 3.627: 98.5345% ( 1) 00:18:31.983 3.627 - 3.650: 98.5656% ( 4) 00:18:31.983 3.650 - 3.674: 98.5733% ( 1) 00:18:31.983 3.674 - 3.698: 98.5811% ( 1) 00:18:31.983 3.698 - 3.721: 98.5966% ( 2) 00:18:31.983 3.721 - 3.745: 98.6043% ( 1) 00:18:31.983 3.745 - 3.769: 98.6121% ( 1) 00:18:31.983 3.769 - 3.793: 98.6198% ( 1) 00:18:31.983 3.793 - 3.816: 98.6353% ( 2) 00:18:31.983 3.840 - 3.864: 98.6431% ( 1) 00:18:31.983 3.911 - 3.935: 98.6586% ( 2) 00:18:31.983 3.935 - 3.959: 98.6741% ( 2) 00:18:31.983 3.982 - 4.006: 98.6819% ( 1) 00:18:31.983 4.053 - 4.077: 98.6896% ( 1) 00:18:31.983 4.172 - 4.196: 98.6974% ( 1) 00:18:31.983 4.836 - 4.859: 98.7051% ( 1) 00:18:31.983 5.144 - 5.167: 98.7129% ( 1) 00:18:31.983 5.594 - 5.618: 98.7206% ( 1) 00:18:31.983 5.641 - 5.665: 98.7284% ( 1) 00:18:31.983 5.713 - 5.736: 98.7361% ( 1) 00:18:31.983 5.950 - 5.973: 98.7439% ( 1) 00:18:31.983 6.163 - 6.210: 98.7516% ( 1) 00:18:31.983 6.305 - 6.353: 98.7672% ( 2) 00:18:31.983 6.353 - 6.400: 98.7749% ( 1) 00:18:31.983 6.495 - 6.542: 98.7904% ( 2) 00:18:31.983 6.874 - 6.921: 98.7982% ( 1) 00:18:31.983 7.348 - 7.396: 98.8059% ( 1) 00:18:31.983 7.396 - 7.443: 98.8214% ( 2) 00:18:31.983 7.633 - 7.680: 98.8292% ( 1) 00:18:31.983 7.870 - 7.917: 98.8369% ( 1) 00:18:31.983 7.917 - 7.964: 98.8447% ( 1) 00:18:31.983 8.107 - 8.154: 98.8524% ( 1) 00:18:31.983 10.193 - 10.240: 98.8602% ( 1) 00:18:31.983 12.326 - 12.421: 98.8680% ( 1) 00:18:31.983 15.739 - 15.834: 98.8912% ( 3) 00:18:31.983 15.834 - 15.929: 98.9067% ( 2) 00:18:31.983 15.929 - 16.024: 98.9377% ( 4) 00:18:31.983 16.024 - 16.119: 98.9688% ( 4) 00:18:31.983 16.119 - 16.213: 99.0075% ( 5) 00:18:31.983 16.213 - 16.308: 99.0385% ( 4) 00:18:31.983 16.308 - 16.403: 99.0463% ( 1) 00:18:31.983 16.403 - 16.498: 99.0618% ( 2) 00:18:31.983 16.498 - 16.593: 99.0851% ( 3) 00:18:31.983 
16.593 - 16.687: 99.1316% ( 6) 00:18:31.983 16.687 - 16.782: 99.1548% ( 3) 00:18:31.983 16.782 - 16.877: 99.1859% ( 4) 00:18:31.983 16.877 - 16.972: 99.2014% ( 2) 00:18:31.983 16.972 - 17.067: 99.2324% ( 4) 00:18:31.983 17.067 - 17.161: 99.2479% ( 2) 00:18:31.983 17.161 - 17.256: 99.2867% ( 5) 00:18:31.983 17.256 - 17.351: 99.2944% ( 1) 00:18:31.983 17.541 - 17.636: 99.3022% ( 1) 00:18:31.983 17.825 - 17.920: 99.3099% ( 1) 00:18:31.983 17.920 - 18.015: 99.3177% ( 1) 00:18:31.983 18.015 - 18.110: 99.3254% ( 1) 00:18:31.983 18.110 - 18.204: 99.3332% ( 1) 00:18:31.983 18.204 - 18.299: 99.3409% ( 1) 00:18:31.983 18.584 - 18.679: 99.3487% ( 1) 00:18:31.983 20.859 - 20.954: 99.3564% ( 1) 00:18:31.983 29.961 - 30.151: 99.3642% ( 1) 00:18:31.983 35.650 - 35.840: 99.3719% ( 1) 00:18:31.983 41.529 - 41.719: 99.3797% ( 1) 00:18:31.983 105.434 - 106.193: 99.3875% ( 1) 00:18:31.983 3665.161 - 3689.434: 99.4030% ( 2) 00:18:31.983 3980.705 - 4004.978: 99.9225% ( 67) 00:18:31.983 4004.978 - 4029.250: 100.0000% ( 10) 00:18:31.983 00:18:31.983 07:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:31.983 07:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:31.983 07:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:31.983 07:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:31.983 07:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:31.983 [ 00:18:31.983 { 00:18:31.983 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:31.983 "subtype": "Discovery", 00:18:31.983 "listen_addresses": [], 00:18:31.983 "allow_any_host": true, 00:18:31.983 "hosts": [] 00:18:31.983 }, 00:18:31.983 { 00:18:31.983 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:31.983 "subtype": "NVMe", 00:18:31.983 "listen_addresses": [ 00:18:31.983 { 00:18:31.983 "trtype": "VFIOUSER", 00:18:31.983 "adrfam": "IPv4", 00:18:31.983 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:31.983 "trsvcid": "0" 00:18:31.983 } 00:18:31.983 ], 00:18:31.983 "allow_any_host": true, 00:18:31.984 "hosts": [], 00:18:31.984 "serial_number": "SPDK1", 00:18:31.984 "model_number": "SPDK bdev Controller", 00:18:31.984 "max_namespaces": 32, 00:18:31.984 "min_cntlid": 1, 00:18:31.984 "max_cntlid": 65519, 00:18:31.984 "namespaces": [ 00:18:31.984 { 00:18:31.984 "nsid": 1, 00:18:31.984 "bdev_name": "Malloc1", 00:18:31.984 "name": "Malloc1", 00:18:31.984 "nguid": "B68624A51D9D4C69B919B44AB28EE00E", 00:18:31.984 "uuid": "b68624a5-1d9d-4c69-b919-b44ab28ee00e" 00:18:31.984 }, 00:18:31.984 { 00:18:31.984 "nsid": 2, 00:18:31.984 "bdev_name": "Malloc3", 00:18:31.984 "name": "Malloc3", 00:18:31.984 "nguid": "DF741D38219246878C2B9CD1DB4667F2", 00:18:31.984 "uuid": "df741d38-2192-4687-8c2b-9cd1db4667f2" 00:18:31.984 } 00:18:31.984 ] 00:18:31.984 }, 00:18:31.984 { 00:18:31.984 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:31.984 "subtype": "NVMe", 00:18:31.984 "listen_addresses": [ 00:18:31.984 { 00:18:31.984 "trtype": "VFIOUSER", 00:18:31.984 "adrfam": "IPv4", 00:18:31.984 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:31.984 "trsvcid": "0" 00:18:31.984 } 00:18:31.984 ], 00:18:31.984 
"allow_any_host": true, 00:18:31.984 "hosts": [], 00:18:31.984 "serial_number": "SPDK2", 00:18:31.984 "model_number": "SPDK bdev Controller", 00:18:31.984 "max_namespaces": 32, 00:18:31.984 "min_cntlid": 1, 00:18:31.984 "max_cntlid": 65519, 00:18:31.984 "namespaces": [ 00:18:31.984 { 00:18:31.984 "nsid": 1, 00:18:31.984 "bdev_name": "Malloc2", 00:18:31.984 "name": "Malloc2", 00:18:31.984 "nguid": "4D0B88AC717640369DA02ACEA9D8749C", 00:18:31.984 "uuid": "4d0b88ac-7176-4036-9da0-2acea9d8749c" 00:18:31.984 } 00:18:31.984 ] 00:18:31.984 } 00:18:31.984 ] 00:18:31.984 07:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:31.984 07:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=234293 00:18:31.984 07:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:31.984 07:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:31.984 07:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:31.984 07:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:31.984 07:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:18:31.984 07:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:18:31.984 07:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:32.272 07:03:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:32.272 07:03:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:18:32.272 07:03:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:18:32.272 07:03:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:32.272 [2024-11-18 07:03:53.084061] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:32.272 07:03:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:32.272 07:03:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:32.272 07:03:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:32.272 07:03:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:32.273 07:03:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:32.531 Malloc4 00:18:32.531 07:03:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:32.789 [2024-11-18 07:03:53.678473] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:32.789 07:03:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:32.789 Asynchronous Event Request test 00:18:32.789 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:32.789 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:32.789 Registering asynchronous event callbacks... 00:18:32.789 Starting namespace attribute notice tests for all controllers... 00:18:32.789 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:32.789 aer_cb - Changed Namespace 00:18:32.789 Cleaning up... 00:18:33.047 [ 00:18:33.047 { 00:18:33.047 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:33.047 "subtype": "Discovery", 00:18:33.047 "listen_addresses": [], 00:18:33.047 "allow_any_host": true, 00:18:33.047 "hosts": [] 00:18:33.047 }, 00:18:33.047 { 00:18:33.047 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:33.047 "subtype": "NVMe", 00:18:33.047 "listen_addresses": [ 00:18:33.047 { 00:18:33.047 "trtype": "VFIOUSER", 00:18:33.047 "adrfam": "IPv4", 00:18:33.047 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:33.047 "trsvcid": "0" 00:18:33.047 } 00:18:33.047 ], 00:18:33.047 "allow_any_host": true, 00:18:33.047 "hosts": [], 00:18:33.047 "serial_number": "SPDK1", 00:18:33.047 "model_number": "SPDK bdev Controller", 00:18:33.047 "max_namespaces": 32, 00:18:33.047 "min_cntlid": 1, 00:18:33.047 "max_cntlid": 65519, 00:18:33.047 "namespaces": [ 00:18:33.047 { 00:18:33.047 "nsid": 1, 00:18:33.047 "bdev_name": "Malloc1", 00:18:33.047 "name": "Malloc1", 00:18:33.047 "nguid": "B68624A51D9D4C69B919B44AB28EE00E", 00:18:33.047 "uuid": "b68624a5-1d9d-4c69-b919-b44ab28ee00e" 00:18:33.047 }, 00:18:33.047 { 00:18:33.047 "nsid": 2, 00:18:33.047 "bdev_name": "Malloc3", 00:18:33.047 "name": "Malloc3", 00:18:33.047 "nguid": "DF741D38219246878C2B9CD1DB4667F2", 00:18:33.047 "uuid": "df741d38-2192-4687-8c2b-9cd1db4667f2" 00:18:33.047 } 00:18:33.047 ] 00:18:33.047 }, 00:18:33.047 { 00:18:33.047 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:33.047 "subtype": "NVMe", 00:18:33.047 "listen_addresses": [ 00:18:33.047 { 00:18:33.047 "trtype": "VFIOUSER", 00:18:33.047 "adrfam": "IPv4", 00:18:33.047 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:33.047 "trsvcid": "0" 00:18:33.047 } 00:18:33.047 ], 00:18:33.047 "allow_any_host": true, 00:18:33.047 "hosts": [], 00:18:33.047 "serial_number": "SPDK2", 00:18:33.047 "model_number": "SPDK bdev Controller", 00:18:33.047 "max_namespaces": 32, 00:18:33.047 "min_cntlid": 1, 00:18:33.047 "max_cntlid": 65519, 00:18:33.047 "namespaces": [ 00:18:33.047 
{ 00:18:33.047 "nsid": 1, 00:18:33.047 "bdev_name": "Malloc2", 00:18:33.047 "name": "Malloc2", 00:18:33.047 "nguid": "4D0B88AC717640369DA02ACEA9D8749C", 00:18:33.047 "uuid": "4d0b88ac-7176-4036-9da0-2acea9d8749c" 00:18:33.047 }, 00:18:33.047 { 00:18:33.047 "nsid": 2, 00:18:33.047 "bdev_name": "Malloc4", 00:18:33.047 "name": "Malloc4", 00:18:33.047 "nguid": "72938C625C3343D9BD306E7C4A28A41A", 00:18:33.047 "uuid": "72938c62-5c33-43d9-bd30-6e7c4a28a41a" 00:18:33.047 } 00:18:33.047 ] 00:18:33.047 } 00:18:33.047 ] 00:18:33.047 07:03:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 234293 00:18:33.047 07:03:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:33.047 07:03:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 228179 00:18:33.047 07:03:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 228179 ']' 00:18:33.047 07:03:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 228179 00:18:33.047 07:03:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:33.047 07:03:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.047 07:03:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 228179 00:18:33.047 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.047 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.047 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 228179' 00:18:33.047 killing process with pid 228179 00:18:33.047 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 228179 00:18:33.047 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 228179 00:18:33.611 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:33.611 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:33.611 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:33.611 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:33.611 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:33.611 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=234443 00:18:33.611 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:33.611 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 234443' 00:18:33.611 Process pid: 234443 00:18:33.611 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:33.611 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # 
waitforlisten 234443 00:18:33.611 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 234443 ']' 00:18:33.611 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.611 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.611 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.611 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.611 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:33.611 [2024-11-18 07:03:54.367683] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:33.611 [2024-11-18 07:03:54.368707] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:18:33.611 [2024-11-18 07:03:54.368772] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.611 [2024-11-18 07:03:54.433065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:33.611 [2024-11-18 07:03:54.474767] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.611 [2024-11-18 07:03:54.474825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.611 [2024-11-18 07:03:54.474864] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:33.611 [2024-11-18 07:03:54.474875] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:33.611 [2024-11-18 07:03:54.474885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:33.611 [2024-11-18 07:03:54.476276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.611 [2024-11-18 07:03:54.476385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.611 [2024-11-18 07:03:54.476472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:33.611 [2024-11-18 07:03:54.476475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.611 [2024-11-18 07:03:54.560271] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:33.611 [2024-11-18 07:03:54.560450] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:33.611 [2024-11-18 07:03:54.560727] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:33.611 [2024-11-18 07:03:54.561255] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:33.611 [2024-11-18 07:03:54.561470] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
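# --- reviewer annotation (not part of the captured log) -----------------------------------
# The trace that follows brings up a second vfio-user target in interrupt mode (pid 234443)
# and then repeats the per-device setup. A minimal sketch of that RPC sequence is collected
# here for readability; every command, flag, name and path is taken from the trace below,
# with /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py abbreviated to rpc.py.
rpc.py nvmf_create_transport -t VFIOUSER -M -I
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
# The same bdev/subsystem/namespace/listener RPCs are then issued for Malloc2 and
# nqn.2019-07.io.spdk:cnode2 against /var/run/vfio-user/domain/vfio-user2/2 before the
# target is torn down again (killprocess 234443).
# -------------------------------------------------------------------------------------------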
00:18:33.869 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.869 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:33.869 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:34.807 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:35.066 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:35.066 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:35.066 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:35.067 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:35.067 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:35.327 Malloc1 00:18:35.327 07:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:35.587 07:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:35.846 07:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:36.106 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:36.106 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:36.106 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:36.674 Malloc2 00:18:36.674 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:36.674 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:36.933 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:37.191 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:37.191 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 234443 00:18:37.191 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 234443 ']' 00:18:37.191 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 234443 00:18:37.191 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:37.191 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.191 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 234443 00:18:37.450 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:37.450 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:37.450 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 234443' 00:18:37.450 killing process with pid 234443 00:18:37.450 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 234443 00:18:37.450 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 234443 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:37.711 00:18:37.711 real 0m53.728s 00:18:37.711 user 3m27.816s 00:18:37.711 sys 0m3.962s 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:37.711 ************************************ 00:18:37.711 END TEST nvmf_vfio_user 00:18:37.711 ************************************ 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:37.711 ************************************ 00:18:37.711 START TEST nvmf_vfio_user_nvme_compliance 00:18:37.711 ************************************ 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:37.711 * Looking for test storage... 
00:18:37.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:37.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.711 --rc genhtml_branch_coverage=1 00:18:37.711 --rc genhtml_function_coverage=1 00:18:37.711 --rc genhtml_legend=1 00:18:37.711 --rc geninfo_all_blocks=1 00:18:37.711 --rc geninfo_unexecuted_blocks=1 00:18:37.711 00:18:37.711 ' 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:37.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.711 --rc genhtml_branch_coverage=1 00:18:37.711 --rc genhtml_function_coverage=1 00:18:37.711 --rc genhtml_legend=1 00:18:37.711 --rc geninfo_all_blocks=1 00:18:37.711 --rc geninfo_unexecuted_blocks=1 00:18:37.711 00:18:37.711 ' 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:37.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.711 --rc genhtml_branch_coverage=1 00:18:37.711 --rc genhtml_function_coverage=1 00:18:37.711 --rc genhtml_legend=1 00:18:37.711 --rc geninfo_all_blocks=1 00:18:37.711 --rc geninfo_unexecuted_blocks=1 00:18:37.711 00:18:37.711 ' 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:37.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.711 --rc genhtml_branch_coverage=1 00:18:37.711 --rc genhtml_function_coverage=1 00:18:37.711 --rc genhtml_legend=1 00:18:37.711 --rc geninfo_all_blocks=1 00:18:37.711 --rc 
geninfo_unexecuted_blocks=1 00:18:37.711 00:18:37.711 ' 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.711 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:37.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=235047 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 235047' 00:18:37.712 Process pid: 235047 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 235047 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 235047 ']' 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.712 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:37.974 [2024-11-18 07:03:58.711795] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:18:37.974 [2024-11-18 07:03:58.711902] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.974 [2024-11-18 07:03:58.778305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:37.974 [2024-11-18 07:03:58.822252] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.974 [2024-11-18 07:03:58.822312] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.974 [2024-11-18 07:03:58.822335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.974 [2024-11-18 07:03:58.822346] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.974 [2024-11-18 07:03:58.822357] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:37.974 [2024-11-18 07:03:58.823657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.974 [2024-11-18 07:03:58.823719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.974 [2024-11-18 07:03:58.823723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.974 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.974 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:18:37.974 07:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:39.356 07:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:39.357 07:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:39.357 07:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:39.357 07:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.357 07:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:39.357 07:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.357 07:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:39.357 07:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:39.357 07:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.357 07:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:39.357 malloc0 00:18:39.357 07:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.357 07:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:39.357 07:03:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.357 07:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:39.357 07:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.357 07:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:39.357 07:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.357 07:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:39.357 07:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.357 07:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:39.357 07:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.357 07:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:39.357 07:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.357 07:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:39.357 00:18:39.357 00:18:39.357 CUnit - A unit testing framework for C - Version 2.1-3 00:18:39.357 http://cunit.sourceforge.net/ 00:18:39.357 00:18:39.357 00:18:39.357 Suite: nvme_compliance 00:18:39.357 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-18 07:04:00.191249] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.357 [2024-11-18 07:04:00.192778] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:39.357 [2024-11-18 07:04:00.192805] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:39.357 [2024-11-18 07:04:00.192818] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:39.357 [2024-11-18 07:04:00.194265] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.357 passed 00:18:39.357 Test: admin_identify_ctrlr_verify_fused ...[2024-11-18 07:04:00.279905] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.357 [2024-11-18 07:04:00.282927] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.357 passed 00:18:39.616 Test: admin_identify_ns ...[2024-11-18 07:04:00.372984] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.616 [2024-11-18 07:04:00.432511] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:39.616 [2024-11-18 07:04:00.440504] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:39.616 [2024-11-18 07:04:00.461636] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:18:39.616 passed 00:18:39.616 Test: admin_get_features_mandatory_features ...[2024-11-18 07:04:00.546083] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.616 [2024-11-18 07:04:00.549101] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.616 passed 00:18:39.876 Test: admin_get_features_optional_features ...[2024-11-18 07:04:00.637718] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.876 [2024-11-18 07:04:00.640737] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.876 passed 00:18:39.876 Test: admin_set_features_number_of_queues ...[2024-11-18 07:04:00.723985] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.876 [2024-11-18 07:04:00.828627] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.134 passed 00:18:40.135 Test: admin_get_log_page_mandatory_logs ...[2024-11-18 07:04:00.911419] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.135 [2024-11-18 07:04:00.914445] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.135 passed 00:18:40.135 Test: admin_get_log_page_with_lpo ...[2024-11-18 07:04:00.999721] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.135 [2024-11-18 07:04:01.067510] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:40.135 [2024-11-18 07:04:01.080592] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.135 passed 00:18:40.394 Test: fabric_property_get ...[2024-11-18 07:04:01.164272] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.394 [2024-11-18 07:04:01.165579] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:40.394 [2024-11-18 07:04:01.167292] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.394 passed 00:18:40.394 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-18 07:04:01.251867] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.394 [2024-11-18 07:04:01.253167] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:40.394 [2024-11-18 07:04:01.254900] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.394 passed 00:18:40.394 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-18 07:04:01.337085] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.654 [2024-11-18 07:04:01.420517] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:40.654 [2024-11-18 07:04:01.436518] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:40.654 [2024-11-18 07:04:01.441625] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.654 passed 00:18:40.654 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-18 07:04:01.525314] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.654 [2024-11-18 07:04:01.526659] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:40.654 [2024-11-18 07:04:01.528337] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.654 passed 00:18:40.654 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-18 07:04:01.611607] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.915 [2024-11-18 07:04:01.685500] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:40.915 [2024-11-18 07:04:01.709503] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:40.915 [2024-11-18 07:04:01.714592] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.915 passed 00:18:40.915 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-18 07:04:01.800782] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.915 [2024-11-18 07:04:01.802106] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:40.915 [2024-11-18 07:04:01.802148] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:40.915 [2024-11-18 07:04:01.803805] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.915 passed 00:18:40.915 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-18 07:04:01.886090] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:41.175 [2024-11-18 07:04:01.976500] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:41.175 [2024-11-18 07:04:01.984499] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:41.175 [2024-11-18 07:04:01.992497] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:41.175 [2024-11-18 07:04:02.000502] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:41.175 [2024-11-18 07:04:02.029610] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:41.175 passed 00:18:41.175 Test: admin_create_io_sq_verify_pc ...[2024-11-18 07:04:02.112179] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:41.175 [2024-11-18 07:04:02.128530] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:41.175 [2024-11-18 07:04:02.146513] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:41.444 passed 00:18:41.444 Test: admin_create_io_qp_max_qps ...[2024-11-18 07:04:02.229042] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:42.831 [2024-11-18 07:04:03.371522] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:18:42.831 [2024-11-18 07:04:03.755232] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:42.831 passed 00:18:43.090 Test: admin_create_io_sq_shared_cq ...[2024-11-18 07:04:03.841765] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:43.090 [2024-11-18 07:04:03.973497] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:43.090 [2024-11-18 07:04:04.010589] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:43.090 passed 00:18:43.090 00:18:43.090 Run Summary: Type Total Ran Passed Failed Inactive 00:18:43.090 suites 1 1 n/a 0 0 00:18:43.090 tests 18 18 18 0 0 00:18:43.090 asserts 
360 360 360 0 n/a 00:18:43.090 00:18:43.090 Elapsed time = 1.589 seconds 00:18:43.090 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 235047 00:18:43.090 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 235047 ']' 00:18:43.090 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 235047 00:18:43.090 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:18:43.090 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.090 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 235047 00:18:43.349 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:43.349 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:43.349 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 235047' 00:18:43.349 killing process with pid 235047 00:18:43.349 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 235047 00:18:43.349 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 235047 00:18:43.349 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:43.349 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:43.349 00:18:43.349 real 0m5.804s 00:18:43.349 user 0m16.338s 00:18:43.349 sys 0m0.524s 00:18:43.349 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.349 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:43.349 ************************************ 00:18:43.349 END TEST nvmf_vfio_user_nvme_compliance 00:18:43.349 ************************************ 00:18:43.349 07:04:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:43.349 07:04:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:43.349 07:04:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.349 07:04:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:43.608 ************************************ 00:18:43.608 START TEST nvmf_vfio_user_fuzz 00:18:43.608 ************************************ 00:18:43.608 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:43.608 * Looking for test storage... 
00:18:43.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:43.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.609 --rc genhtml_branch_coverage=1 00:18:43.609 --rc genhtml_function_coverage=1 00:18:43.609 --rc genhtml_legend=1 00:18:43.609 --rc geninfo_all_blocks=1 00:18:43.609 --rc geninfo_unexecuted_blocks=1 00:18:43.609 00:18:43.609 ' 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:43.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.609 --rc genhtml_branch_coverage=1 00:18:43.609 --rc genhtml_function_coverage=1 00:18:43.609 --rc genhtml_legend=1 00:18:43.609 --rc geninfo_all_blocks=1 00:18:43.609 --rc geninfo_unexecuted_blocks=1 00:18:43.609 00:18:43.609 ' 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:43.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.609 --rc genhtml_branch_coverage=1 00:18:43.609 --rc genhtml_function_coverage=1 00:18:43.609 --rc genhtml_legend=1 00:18:43.609 --rc geninfo_all_blocks=1 00:18:43.609 --rc geninfo_unexecuted_blocks=1 00:18:43.609 00:18:43.609 ' 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:43.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.609 --rc genhtml_branch_coverage=1 00:18:43.609 --rc genhtml_function_coverage=1 00:18:43.609 --rc genhtml_legend=1 00:18:43.609 --rc geninfo_all_blocks=1 00:18:43.609 --rc geninfo_unexecuted_blocks=1 00:18:43.609 00:18:43.609 ' 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:43.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:43.609 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:43.610 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:43.610 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:43.610 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:43.610 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:43.610 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:43.610 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:43.610 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:43.610 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:43.610 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:43.610 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=235779 00:18:43.610 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:43.610 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 235779' 00:18:43.610 Process pid: 235779 00:18:43.610 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:43.610 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 235779 00:18:43.610 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 235779 ']' 00:18:43.610 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.610 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.610 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
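Editor's note: the trace above launches the fuzz-target nvmf_tgt and then blocks in waitforlisten until the RPC socket answers. The following is only a minimal sketch of that launch-and-wait step done by hand, not the harness's own waitforlisten implementation; the scripts/rpc.py path and the spdk_get_version call are assumptions based on a stock SPDK checkout rather than anything shown in this log.

    # Launch the target with the same flags as the trace, then poll its RPC socket.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    for _ in $(seq 1 30); do
        # spdk_get_version succeeds only once the app is up and serving /var/tmp/spdk.sock
        if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
            break
        fi
        sleep 1
    done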
00:18:43.610 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.610 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:43.869 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.869 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:18:43.869 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:44.808 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:44.808 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.808 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:44.808 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.808 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:45.068 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:45.068 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.069 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:45.069 malloc0 00:18:45.069 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.069 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:45.069 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.069 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:45.069 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.069 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:45.069 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.069 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:45.069 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.069 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:45.069 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.069 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:45.069 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.069 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
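Editor's note: the rpc_cmd calls traced above provision the vfio-user fuzz subsystem one step at a time. Below is a consolidated restatement of that same sequence written as direct scripts/rpc.py invocations; in the harness these go through the rpc_cmd wrapper, so the explicit rpc.py path is an assumption, while the RPC names and arguments are taken from the trace itself.

    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t VFIOUSER                 # enable the vfio-user transport
    mkdir -p /var/run/vfio-user                            # socket directory for the emulated controller
    $RPC bdev_malloc_create 64 512 -b malloc0              # 64 MiB backing bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The trid string built right after this ('trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user') is what nvme_fuzz is then pointed at.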
00:18:45.069 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:17.176 Fuzzing completed. Shutting down the fuzz application 00:19:17.176 00:19:17.176 Dumping successful admin opcodes: 00:19:17.176 8, 9, 10, 24, 00:19:17.176 Dumping successful io opcodes: 00:19:17.176 0, 00:19:17.177 NS: 0x20000081ef00 I/O qp, Total commands completed: 625822, total successful commands: 2425, random_seed: 2288713152 00:19:17.177 NS: 0x20000081ef00 admin qp, Total commands completed: 145518, total successful commands: 1180, random_seed: 3624985408 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 235779 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 235779 ']' 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 235779 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 235779 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 235779' 00:19:17.177 killing process with pid 235779 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 235779 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 235779 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:17.177 00:19:17.177 real 0m32.176s 00:19:17.177 user 0m29.817s 00:19:17.177 sys 0m29.919s 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:17.177 ************************************ 
00:19:17.177 END TEST nvmf_vfio_user_fuzz 00:19:17.177 ************************************ 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:17.177 ************************************ 00:19:17.177 START TEST nvmf_auth_target 00:19:17.177 ************************************ 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:17.177 * Looking for test storage... 00:19:17.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:17.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.177 --rc genhtml_branch_coverage=1 00:19:17.177 --rc genhtml_function_coverage=1 00:19:17.177 --rc genhtml_legend=1 00:19:17.177 --rc geninfo_all_blocks=1 00:19:17.177 --rc geninfo_unexecuted_blocks=1 00:19:17.177 00:19:17.177 ' 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:17.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.177 --rc genhtml_branch_coverage=1 00:19:17.177 --rc genhtml_function_coverage=1 00:19:17.177 --rc genhtml_legend=1 00:19:17.177 --rc geninfo_all_blocks=1 00:19:17.177 --rc geninfo_unexecuted_blocks=1 00:19:17.177 00:19:17.177 ' 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:17.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.177 --rc genhtml_branch_coverage=1 00:19:17.177 --rc genhtml_function_coverage=1 00:19:17.177 --rc genhtml_legend=1 00:19:17.177 --rc geninfo_all_blocks=1 00:19:17.177 --rc geninfo_unexecuted_blocks=1 00:19:17.177 00:19:17.177 ' 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:17.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.177 --rc genhtml_branch_coverage=1 00:19:17.177 --rc genhtml_function_coverage=1 00:19:17.177 --rc genhtml_legend=1 00:19:17.177 --rc geninfo_all_blocks=1 00:19:17.177 --rc geninfo_unexecuted_blocks=1 00:19:17.177 00:19:17.177 ' 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:17.177 07:04:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:17.177 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:17.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:17.178 07:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:18.113 
07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:18.113 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:18.114 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:18.114 07:04:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:18.114 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:18.114 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:18.114 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:18.114 07:04:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:18.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:18.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:19:18.114 00:19:18.114 --- 10.0.0.2 ping statistics --- 00:19:18.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.114 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:18.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:18.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:19:18.114 00:19:18.114 --- 10.0.0.1 ping statistics --- 00:19:18.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.114 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:18.114 07:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.114 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=241220 00:19:18.114 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:18.114 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 241220 00:19:18.114 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 241220 ']' 00:19:18.114 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.114 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.114 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
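Editor's note: the nvmf_tcp_init trace above is hard to read through the xtrace prefixes, so here is a consolidated sketch of the namespace wiring it performs before the ping checks. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are specific to this run's E810 ports; only the steps visible in the trace are restated, nothing is added.

    NS=cvl_0_0_ns_spdk                                # target-side network namespace from the trace
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                   # move the target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side keeps 10.0.0.1
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target listens on 10.0.0.2
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # the sanity pings shown above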
00:19:18.114 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.114 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=241241 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b12dff85ef93bcbfc2726a10df3f727bd38bb3a2af166042 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.oJ7 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b12dff85ef93bcbfc2726a10df3f727bd38bb3a2af166042 0 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b12dff85ef93bcbfc2726a10df3f727bd38bb3a2af166042 0 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b12dff85ef93bcbfc2726a10df3f727bd38bb3a2af166042 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:18.372 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:19:18.630 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.oJ7 00:19:18.630 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.oJ7 00:19:18.630 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.oJ7 00:19:18.630 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:18.630 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:18.630 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:18.630 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:18.630 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:18.630 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:18.630 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:18.630 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0e5953f3e4d22f9564ceee9911f3a489feb10bdd531807c17a2506dea6cad3fd 00:19:18.630 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:18.630 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.0h7 00:19:18.630 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0e5953f3e4d22f9564ceee9911f3a489feb10bdd531807c17a2506dea6cad3fd 3 00:19:18.630 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0e5953f3e4d22f9564ceee9911f3a489feb10bdd531807c17a2506dea6cad3fd 3 00:19:18.630 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:18.630 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:18.630 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0e5953f3e4d22f9564ceee9911f3a489feb10bdd531807c17a2506dea6cad3fd 00:19:18.630 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:18.630 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:18.630 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.0h7 00:19:18.630 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.0h7 00:19:18.630 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.0h7 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
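Going the other way, the payload of a generated key file can be unpacked to confirm it is nothing more than the original hex string plus that trailing CRC-32. This check is purely illustrative and not part of the test; the file name below is just the mktemp result from this particular run, and the cut/python plumbing is an assumption of the sketch:

  # Illustrative only: split a DHHC-1 secret into prefix:digest:payload and strip the CRC.
  secret=$(cat /tmp/spdk.key-null.oJ7)       # e.g. DHHC-1:00:<base64>:
  b64=$(cut -d: -f3 <<< "$secret")
  python3 -c 'import base64,sys,zlib; raw=base64.b64decode(sys.argv[1]); key,crc=raw[:-4],raw[-4:]; assert zlib.crc32(key).to_bytes(4,"little")==crc, "CRC mismatch"; print(key.decode())' "$b64"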
00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=80f5a3c92c74e8924b05b110102fc060 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.7UP 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 80f5a3c92c74e8924b05b110102fc060 1 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 80f5a3c92c74e8924b05b110102fc060 1 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=80f5a3c92c74e8924b05b110102fc060 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.7UP 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.7UP 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.7UP 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6bb5298d85171d30677a1ea3c3e65bc14cfbd09fa38c1bc9 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Svh 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6bb5298d85171d30677a1ea3c3e65bc14cfbd09fa38c1bc9 2 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6bb5298d85171d30677a1ea3c3e65bc14cfbd09fa38c1bc9 2 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:18.631 07:04:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6bb5298d85171d30677a1ea3c3e65bc14cfbd09fa38c1bc9 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Svh 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Svh 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Svh 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ba7b4c571bfbed64f2412b2c91c666fc675da4bc3e638e42 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.hvN 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ba7b4c571bfbed64f2412b2c91c666fc675da4bc3e638e42 2 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ba7b4c571bfbed64f2412b2c91c666fc675da4bc3e638e42 2 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ba7b4c571bfbed64f2412b2c91c666fc675da4bc3e638e42 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.hvN 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.hvN 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.hvN 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
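The same gen_dhchap_key sequence repeats above for every keys[i]/ckeys[i] slot, only with a different digest name and length (e.g. keys[1] uses sha256/32 and ckeys[1] sha384/48). Pieced together from the traced statements, the helper is roughly as follows; the redirection into the temp file and the exact wrapper shape of format_dhchap_key are assumptions, the rest mirrors the xtrace:

  # Rough reconstruction of gen_dhchap_key from the xtrace above.
  gen_dhchap_key() {
      local digest=$1 len=$2 file key
      local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)              # $len hex characters
      file=$(mktemp -t "spdk.key-$digest.XXX")
      format_dhchap_key "$key" "${digests[$digest]}" > "$file"    # DHHC-1:<id>:<base64>:
      chmod 0600 "$file"
      echo "$file"
  }

  # Assumed thin wrapper, matching the traced call order (format_key DHHC-1 <key> <digest>);
  # see the format_key sketch earlier in this section.
  format_dhchap_key() { format_key "DHHC-1" "$1" "$2"; }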
00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a125c7f8445fb4409f617b83be8a3816 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.u57 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a125c7f8445fb4409f617b83be8a3816 1 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a125c7f8445fb4409f617b83be8a3816 1 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a125c7f8445fb4409f617b83be8a3816 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.u57 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.u57 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.u57 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:18.631 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:18.890 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:18.890 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5aa81a4605bccfd26c9472beef38c0774f927adb08868b6515357aa2aa938392 00:19:18.890 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:18.890 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.oMV 00:19:18.890 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 5aa81a4605bccfd26c9472beef38c0774f927adb08868b6515357aa2aa938392 3 00:19:18.890 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5aa81a4605bccfd26c9472beef38c0774f927adb08868b6515357aa2aa938392 3 00:19:18.890 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:18.890 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:18.890 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5aa81a4605bccfd26c9472beef38c0774f927adb08868b6515357aa2aa938392 00:19:18.890 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:18.890 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:18.890 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.oMV 00:19:18.890 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.oMV 00:19:18.890 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.oMV 00:19:18.890 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:18.890 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 241220 00:19:18.890 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 241220 ']' 00:19:18.890 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.890 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.890 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.890 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.890 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.149 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.149 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:19.149 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 241241 /var/tmp/host.sock 00:19:19.149 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 241241 ']' 00:19:19.149 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:19.149 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.149 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:19.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
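At this point two SPDK applications are running: the NVMe-oF target inside the namespace, answering RPCs on /var/tmp/spdk.sock, and a second spdk_tgt acting as the host/initiator side on /var/tmp/host.sock (started above with -m 2 -r /var/tmp/host.sock -L nvme_auth). The hostrpc calls that follow simply route rpc.py at that second socket; a minimal wrapper consistent with the traced commands would be (the wrapper body is an assumption, the paths are from the trace):

  # Every "hostrpc <cmd>" line below expands to this rpc.py call on the host-side socket.
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  hostrpc() {
      "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
  }

  # Target-side rpc_cmd calls go to the default /var/tmp/spdk.sock instead, so each key file
  # is registered twice, once per application:
  #   rpc_cmd  keyring_file_add_key key0 /tmp/spdk.key-null.oJ7
  #   hostrpc  keyring_file_add_key key0 /tmp/spdk.key-null.oJ7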
00:19:19.149 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.149 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.408 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.408 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:19.408 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:19.408 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.408 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.408 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.408 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:19.408 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oJ7 00:19:19.408 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.408 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.408 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.408 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.oJ7 00:19:19.408 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.oJ7 00:19:19.666 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.0h7 ]] 00:19:19.666 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0h7 00:19:19.666 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.666 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.666 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.666 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0h7 00:19:19.666 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0h7 00:19:19.925 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:19.925 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.7UP 00:19:19.925 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.925 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.925 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.925 07:04:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.7UP 00:19:19.925 07:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.7UP 00:19:20.184 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Svh ]] 00:19:20.184 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Svh 00:19:20.184 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.184 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.184 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.184 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Svh 00:19:20.184 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Svh 00:19:20.442 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:20.442 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.hvN 00:19:20.442 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.442 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.442 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.442 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.hvN 00:19:20.442 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.hvN 00:19:20.701 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.u57 ]] 00:19:20.701 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.u57 00:19:20.701 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.701 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.701 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.701 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.u57 00:19:20.701 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.u57 00:19:21.269 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:21.269 07:04:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.oMV 00:19:21.269 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.269 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.269 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.269 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.oMV 00:19:21.269 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.oMV 00:19:21.269 07:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:21.269 07:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:21.269 07:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.269 07:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.269 07:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:21.269 07:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:21.528 07:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:21.528 07:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.528 07:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:21.528 07:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:21.528 07:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:21.528 07:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.528 07:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.528 07:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.528 07:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.528 07:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.528 07:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.528 07:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.528 
07:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.096 00:19:22.096 07:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.096 07:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.096 07:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.096 07:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.356 07:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.356 07:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.356 07:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.356 07:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.356 07:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.356 { 00:19:22.356 "cntlid": 1, 00:19:22.356 "qid": 0, 00:19:22.356 "state": "enabled", 00:19:22.356 "thread": "nvmf_tgt_poll_group_000", 00:19:22.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:22.356 "listen_address": { 00:19:22.356 "trtype": "TCP", 00:19:22.356 "adrfam": "IPv4", 00:19:22.356 "traddr": "10.0.0.2", 00:19:22.356 "trsvcid": "4420" 00:19:22.356 }, 00:19:22.356 "peer_address": { 00:19:22.356 "trtype": "TCP", 00:19:22.356 "adrfam": "IPv4", 00:19:22.356 "traddr": "10.0.0.1", 00:19:22.356 "trsvcid": "38318" 00:19:22.356 }, 00:19:22.356 "auth": { 00:19:22.356 "state": "completed", 00:19:22.356 "digest": "sha256", 00:19:22.356 "dhgroup": "null" 00:19:22.356 } 00:19:22.356 } 00:19:22.356 ]' 00:19:22.356 07:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.356 07:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.356 07:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.356 07:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:22.356 07:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.356 07:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.356 07:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.356 07:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.615 07:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:19:22.615 07:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:19:27.883 07:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.883 07:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.883 07:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.883 07:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.883 07:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.883 07:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.883 07:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:27.883 07:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:27.883 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:27.883 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.883 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:27.883 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:27.883 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:27.883 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.883 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.883 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.883 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.883 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.883 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.883 07:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.883 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.883 00:19:27.883 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:27.883 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:27.883 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.883 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.883 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.883 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.883 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.883 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.883 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.883 { 00:19:27.883 "cntlid": 3, 00:19:27.883 "qid": 0, 00:19:27.883 "state": "enabled", 00:19:27.883 "thread": "nvmf_tgt_poll_group_000", 00:19:27.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:27.883 "listen_address": { 00:19:27.883 "trtype": "TCP", 00:19:27.883 "adrfam": "IPv4", 00:19:27.883 "traddr": "10.0.0.2", 00:19:27.883 "trsvcid": "4420" 00:19:27.883 }, 00:19:27.883 "peer_address": { 00:19:27.883 "trtype": "TCP", 00:19:27.883 "adrfam": "IPv4", 00:19:27.883 "traddr": "10.0.0.1", 00:19:27.883 "trsvcid": "38340" 00:19:27.883 }, 00:19:27.883 "auth": { 00:19:27.883 "state": "completed", 00:19:27.883 "digest": "sha256", 00:19:27.883 "dhgroup": "null" 00:19:27.883 } 00:19:27.883 } 00:19:27.883 ]' 00:19:27.883 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.142 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.142 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.142 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:28.142 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.142 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.142 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.142 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.404 07:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:19:28.404 07:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:19:29.340 07:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.340 07:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.340 07:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.340 07:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.340 07:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.340 07:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.340 07:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:29.340 07:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:29.597 07:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:29.597 07:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.597 07:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:29.597 07:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:29.597 07:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:29.598 07:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.598 07:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.598 07:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.598 07:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.598 07:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.598 07:04:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.598 07:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.598 07:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.855 00:19:29.855 07:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.855 07:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.855 07:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.114 07:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.114 07:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.114 07:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.114 07:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.114 07:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.114 07:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.114 { 00:19:30.114 "cntlid": 5, 00:19:30.114 "qid": 0, 00:19:30.114 "state": "enabled", 00:19:30.114 "thread": "nvmf_tgt_poll_group_000", 00:19:30.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:30.114 "listen_address": { 00:19:30.114 "trtype": "TCP", 00:19:30.114 "adrfam": "IPv4", 00:19:30.114 "traddr": "10.0.0.2", 00:19:30.114 "trsvcid": "4420" 00:19:30.114 }, 00:19:30.114 "peer_address": { 00:19:30.114 "trtype": "TCP", 00:19:30.114 "adrfam": "IPv4", 00:19:30.114 "traddr": "10.0.0.1", 00:19:30.114 "trsvcid": "60990" 00:19:30.114 }, 00:19:30.114 "auth": { 00:19:30.114 "state": "completed", 00:19:30.114 "digest": "sha256", 00:19:30.114 "dhgroup": "null" 00:19:30.114 } 00:19:30.114 } 00:19:30.114 ]' 00:19:30.114 07:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.373 07:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.373 07:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.373 07:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:30.373 07:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.373 07:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.373 07:04:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.373 07:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.633 07:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:19:30.633 07:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:19:31.573 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.573 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:31.573 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.573 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.573 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.573 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.573 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:31.573 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:31.832 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:31.832 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.832 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:31.832 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:31.832 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:31.832 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.832 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:31.832 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.832 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:31.832 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.832 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:31.832 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:31.832 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.091 00:19:32.091 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.091 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.091 07:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.349 07:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.349 07:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.349 07:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.349 07:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.349 07:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.349 07:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:32.349 { 00:19:32.349 "cntlid": 7, 00:19:32.349 "qid": 0, 00:19:32.349 "state": "enabled", 00:19:32.349 "thread": "nvmf_tgt_poll_group_000", 00:19:32.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:32.349 "listen_address": { 00:19:32.349 "trtype": "TCP", 00:19:32.349 "adrfam": "IPv4", 00:19:32.349 "traddr": "10.0.0.2", 00:19:32.349 "trsvcid": "4420" 00:19:32.349 }, 00:19:32.349 "peer_address": { 00:19:32.349 "trtype": "TCP", 00:19:32.349 "adrfam": "IPv4", 00:19:32.349 "traddr": "10.0.0.1", 00:19:32.349 "trsvcid": "32796" 00:19:32.349 }, 00:19:32.349 "auth": { 00:19:32.349 "state": "completed", 00:19:32.349 "digest": "sha256", 00:19:32.349 "dhgroup": "null" 00:19:32.349 } 00:19:32.349 } 00:19:32.349 ]' 00:19:32.349 07:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:32.349 07:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.349 07:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.349 07:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:32.349 07:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:32.349 07:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.349 07:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.349 07:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.608 07:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:19:32.608 07:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:19:33.547 07:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.547 07:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.547 07:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.547 07:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.547 07:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.547 07:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:33.547 07:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.547 07:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:33.547 07:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:33.806 07:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:33.806 07:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.806 07:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:33.806 07:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:33.806 07:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:33.806 07:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.806 07:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.806 07:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.806 07:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.806 07:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.806 07:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.806 07:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.806 07:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.376 00:19:34.376 07:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.376 07:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.376 07:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.635 07:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.635 07:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.635 07:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.635 07:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.635 07:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.635 07:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.635 { 00:19:34.635 "cntlid": 9, 00:19:34.635 "qid": 0, 00:19:34.635 "state": "enabled", 00:19:34.635 "thread": "nvmf_tgt_poll_group_000", 00:19:34.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:34.635 "listen_address": { 00:19:34.635 "trtype": "TCP", 00:19:34.635 "adrfam": "IPv4", 00:19:34.635 "traddr": "10.0.0.2", 00:19:34.635 "trsvcid": "4420" 00:19:34.635 }, 00:19:34.635 "peer_address": { 00:19:34.635 "trtype": "TCP", 00:19:34.635 "adrfam": "IPv4", 00:19:34.635 "traddr": "10.0.0.1", 00:19:34.635 "trsvcid": "32824" 00:19:34.635 }, 00:19:34.635 "auth": { 00:19:34.635 "state": "completed", 00:19:34.635 "digest": "sha256", 00:19:34.635 "dhgroup": "ffdhe2048" 00:19:34.635 } 00:19:34.635 } 00:19:34.635 ]' 00:19:34.635 07:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.635 07:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.635 07:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.635 07:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:19:34.635 07:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.635 07:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.635 07:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.635 07:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.894 07:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:19:34.894 07:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:19:35.832 07:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.832 07:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.832 07:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.832 07:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.832 07:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.832 07:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.832 07:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:35.832 07:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:36.090 07:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:36.090 07:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.090 07:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:36.090 07:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:36.090 07:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:36.090 07:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.090 07:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.090 07:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.090 07:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.090 07:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.090 07:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.090 07:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.090 07:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.349 00:19:36.349 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.349 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.349 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.608 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.608 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.608 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.608 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.608 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.608 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.608 { 00:19:36.608 "cntlid": 11, 00:19:36.608 "qid": 0, 00:19:36.608 "state": "enabled", 00:19:36.608 "thread": "nvmf_tgt_poll_group_000", 00:19:36.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:36.608 "listen_address": { 00:19:36.608 "trtype": "TCP", 00:19:36.608 "adrfam": "IPv4", 00:19:36.608 "traddr": "10.0.0.2", 00:19:36.608 "trsvcid": "4420" 00:19:36.608 }, 00:19:36.608 "peer_address": { 00:19:36.608 "trtype": "TCP", 00:19:36.608 "adrfam": "IPv4", 00:19:36.608 "traddr": "10.0.0.1", 00:19:36.608 "trsvcid": "32846" 00:19:36.608 }, 00:19:36.608 "auth": { 00:19:36.608 "state": "completed", 00:19:36.608 "digest": "sha256", 00:19:36.608 "dhgroup": "ffdhe2048" 00:19:36.608 } 00:19:36.608 } 00:19:36.608 ]' 00:19:36.608 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.866 07:04:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.866 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.866 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:36.866 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.866 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.866 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.866 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.125 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:19:37.125 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:19:38.061 07:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.061 07:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.061 07:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.061 07:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.061 07:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.061 07:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.061 07:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:38.061 07:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:38.322 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:38.322 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.322 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:38.322 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:38.322 07:04:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:38.322 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.322 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.322 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.322 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.322 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.322 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.322 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.322 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.581 00:19:38.581 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.581 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.581 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.148 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.148 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.148 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.148 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.148 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.148 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.148 { 00:19:39.148 "cntlid": 13, 00:19:39.148 "qid": 0, 00:19:39.148 "state": "enabled", 00:19:39.148 "thread": "nvmf_tgt_poll_group_000", 00:19:39.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:39.148 "listen_address": { 00:19:39.148 "trtype": "TCP", 00:19:39.148 "adrfam": "IPv4", 00:19:39.148 "traddr": "10.0.0.2", 00:19:39.148 "trsvcid": "4420" 00:19:39.148 }, 00:19:39.148 "peer_address": { 00:19:39.148 "trtype": "TCP", 00:19:39.148 "adrfam": "IPv4", 00:19:39.148 "traddr": "10.0.0.1", 00:19:39.148 "trsvcid": "32870" 00:19:39.148 }, 00:19:39.148 "auth": { 00:19:39.148 "state": "completed", 00:19:39.148 "digest": 
"sha256", 00:19:39.148 "dhgroup": "ffdhe2048" 00:19:39.148 } 00:19:39.148 } 00:19:39.148 ]' 00:19:39.148 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.148 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.148 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.148 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:39.148 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.148 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.148 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.148 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.406 07:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:19:39.406 07:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:19:40.341 07:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.341 07:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.341 07:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.341 07:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.341 07:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.341 07:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.341 07:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:40.341 07:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:40.599 07:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:40.599 07:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.599 07:05:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:40.599 07:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:40.599 07:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:40.599 07:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.599 07:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:40.599 07:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.599 07:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.599 07:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.599 07:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:40.599 07:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:40.599 07:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:40.857 00:19:40.857 07:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.857 07:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.857 07:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.116 07:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.116 07:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.116 07:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.116 07:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.116 07:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.116 07:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.116 { 00:19:41.116 "cntlid": 15, 00:19:41.116 "qid": 0, 00:19:41.116 "state": "enabled", 00:19:41.116 "thread": "nvmf_tgt_poll_group_000", 00:19:41.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:41.116 "listen_address": { 00:19:41.116 "trtype": "TCP", 00:19:41.116 "adrfam": "IPv4", 00:19:41.116 "traddr": "10.0.0.2", 00:19:41.116 "trsvcid": "4420" 00:19:41.116 }, 00:19:41.116 "peer_address": { 00:19:41.116 "trtype": "TCP", 00:19:41.116 "adrfam": "IPv4", 00:19:41.116 "traddr": "10.0.0.1", 00:19:41.116 
"trsvcid": "33112" 00:19:41.116 }, 00:19:41.116 "auth": { 00:19:41.116 "state": "completed", 00:19:41.116 "digest": "sha256", 00:19:41.116 "dhgroup": "ffdhe2048" 00:19:41.116 } 00:19:41.116 } 00:19:41.116 ]' 00:19:41.116 07:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.376 07:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.376 07:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.376 07:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:41.376 07:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.376 07:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.376 07:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.376 07:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.634 07:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:19:41.634 07:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:19:42.574 07:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.574 07:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:42.574 07:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.574 07:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.574 07:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.574 07:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.574 07:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:42.574 07:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:42.574 07:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:42.833 07:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:42.833 07:05:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.833 07:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:42.833 07:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:42.833 07:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:42.833 07:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.833 07:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.833 07:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.833 07:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.833 07:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.833 07:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.833 07:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.833 07:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.092 00:19:43.092 07:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.092 07:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.092 07:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.350 07:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.350 07:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.350 07:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.350 07:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.350 07:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.351 07:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.351 { 00:19:43.351 "cntlid": 17, 00:19:43.351 "qid": 0, 00:19:43.351 "state": "enabled", 00:19:43.351 "thread": "nvmf_tgt_poll_group_000", 00:19:43.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:43.351 "listen_address": { 00:19:43.351 "trtype": "TCP", 00:19:43.351 "adrfam": "IPv4", 
00:19:43.351 "traddr": "10.0.0.2", 00:19:43.351 "trsvcid": "4420" 00:19:43.351 }, 00:19:43.351 "peer_address": { 00:19:43.351 "trtype": "TCP", 00:19:43.351 "adrfam": "IPv4", 00:19:43.351 "traddr": "10.0.0.1", 00:19:43.351 "trsvcid": "33126" 00:19:43.351 }, 00:19:43.351 "auth": { 00:19:43.351 "state": "completed", 00:19:43.351 "digest": "sha256", 00:19:43.351 "dhgroup": "ffdhe3072" 00:19:43.351 } 00:19:43.351 } 00:19:43.351 ]' 00:19:43.351 07:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.609 07:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.609 07:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.609 07:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:43.609 07:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.609 07:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.609 07:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.609 07:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.867 07:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:19:43.867 07:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:19:44.806 07:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.807 07:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.807 07:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.807 07:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.807 07:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.807 07:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.807 07:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:44.807 07:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:45.066 07:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:45.066 07:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:45.066 07:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:45.066 07:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:45.066 07:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:45.066 07:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.066 07:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.066 07:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.066 07:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.066 07:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.066 07:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.066 07:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.066 07:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.325 00:19:45.325 07:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.325 07:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.325 07:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.584 07:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.584 07:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.584 07:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.584 07:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.584 07:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.584 07:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.584 { 
00:19:45.584 "cntlid": 19, 00:19:45.584 "qid": 0, 00:19:45.584 "state": "enabled", 00:19:45.584 "thread": "nvmf_tgt_poll_group_000", 00:19:45.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:45.584 "listen_address": { 00:19:45.584 "trtype": "TCP", 00:19:45.584 "adrfam": "IPv4", 00:19:45.584 "traddr": "10.0.0.2", 00:19:45.584 "trsvcid": "4420" 00:19:45.584 }, 00:19:45.584 "peer_address": { 00:19:45.584 "trtype": "TCP", 00:19:45.584 "adrfam": "IPv4", 00:19:45.584 "traddr": "10.0.0.1", 00:19:45.584 "trsvcid": "33144" 00:19:45.584 }, 00:19:45.584 "auth": { 00:19:45.584 "state": "completed", 00:19:45.584 "digest": "sha256", 00:19:45.584 "dhgroup": "ffdhe3072" 00:19:45.584 } 00:19:45.584 } 00:19:45.584 ]' 00:19:45.584 07:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.842 07:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.842 07:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.842 07:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:45.842 07:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.843 07:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.843 07:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.843 07:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.101 07:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:19:46.101 07:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:19:47.040 07:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.040 07:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.040 07:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.040 07:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.040 07:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.040 07:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.040 07:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:47.040 07:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:47.298 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:47.298 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.298 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:47.298 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:47.298 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:47.298 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.298 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.298 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.299 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.299 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.299 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.299 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.299 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.557 00:19:47.557 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.557 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.557 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.815 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.816 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.816 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.816 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.816 07:05:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.816 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.816 { 00:19:47.816 "cntlid": 21, 00:19:47.816 "qid": 0, 00:19:47.816 "state": "enabled", 00:19:47.816 "thread": "nvmf_tgt_poll_group_000", 00:19:47.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:47.816 "listen_address": { 00:19:47.816 "trtype": "TCP", 00:19:47.816 "adrfam": "IPv4", 00:19:47.816 "traddr": "10.0.0.2", 00:19:47.816 "trsvcid": "4420" 00:19:47.816 }, 00:19:47.816 "peer_address": { 00:19:47.816 "trtype": "TCP", 00:19:47.816 "adrfam": "IPv4", 00:19:47.816 "traddr": "10.0.0.1", 00:19:47.816 "trsvcid": "33176" 00:19:47.816 }, 00:19:47.816 "auth": { 00:19:47.816 "state": "completed", 00:19:47.816 "digest": "sha256", 00:19:47.816 "dhgroup": "ffdhe3072" 00:19:47.816 } 00:19:47.816 } 00:19:47.816 ]' 00:19:47.816 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.816 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.816 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.075 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:48.075 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.075 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.075 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.075 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.334 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:19:48.334 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:19:49.279 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.279 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.279 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.279 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.279 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:49.279 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.279 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:49.279 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:49.537 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:49.537 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.537 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:49.537 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:49.537 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:49.537 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.537 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:49.537 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.537 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.537 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.537 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:49.537 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:49.537 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:49.796 00:19:49.796 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.796 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.796 07:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.054 07:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.054 07:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.054 07:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.054 07:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.054 07:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.054 07:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.055 { 00:19:50.055 "cntlid": 23, 00:19:50.055 "qid": 0, 00:19:50.055 "state": "enabled", 00:19:50.055 "thread": "nvmf_tgt_poll_group_000", 00:19:50.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:50.055 "listen_address": { 00:19:50.055 "trtype": "TCP", 00:19:50.055 "adrfam": "IPv4", 00:19:50.055 "traddr": "10.0.0.2", 00:19:50.055 "trsvcid": "4420" 00:19:50.055 }, 00:19:50.055 "peer_address": { 00:19:50.055 "trtype": "TCP", 00:19:50.055 "adrfam": "IPv4", 00:19:50.055 "traddr": "10.0.0.1", 00:19:50.055 "trsvcid": "33854" 00:19:50.055 }, 00:19:50.055 "auth": { 00:19:50.055 "state": "completed", 00:19:50.055 "digest": "sha256", 00:19:50.055 "dhgroup": "ffdhe3072" 00:19:50.055 } 00:19:50.055 } 00:19:50.055 ]' 00:19:50.055 07:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.313 07:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.313 07:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.313 07:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:50.313 07:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.313 07:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.313 07:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.313 07:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.573 07:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:19:50.573 07:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:19:51.511 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.511 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.511 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.511 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.511 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:51.511 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.511 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.511 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:51.511 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:51.770 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:51.770 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.770 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.770 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:51.770 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:51.770 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.770 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.770 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.770 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.770 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.770 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.770 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.770 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.028 00:19:52.028 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.028 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.028 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.288 07:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.288 07:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.288 07:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.288 07:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.288 07:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.288 07:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.288 { 00:19:52.288 "cntlid": 25, 00:19:52.288 "qid": 0, 00:19:52.288 "state": "enabled", 00:19:52.288 "thread": "nvmf_tgt_poll_group_000", 00:19:52.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:52.288 "listen_address": { 00:19:52.288 "trtype": "TCP", 00:19:52.288 "adrfam": "IPv4", 00:19:52.288 "traddr": "10.0.0.2", 00:19:52.288 "trsvcid": "4420" 00:19:52.288 }, 00:19:52.288 "peer_address": { 00:19:52.288 "trtype": "TCP", 00:19:52.288 "adrfam": "IPv4", 00:19:52.288 "traddr": "10.0.0.1", 00:19:52.288 "trsvcid": "33870" 00:19:52.288 }, 00:19:52.288 "auth": { 00:19:52.288 "state": "completed", 00:19:52.288 "digest": "sha256", 00:19:52.288 "dhgroup": "ffdhe4096" 00:19:52.288 } 00:19:52.288 } 00:19:52.288 ]' 00:19:52.288 07:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.547 07:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.547 07:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.547 07:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:52.547 07:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.547 07:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.547 07:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.547 07:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.805 07:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:19:52.805 07:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:19:53.746 07:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.746 07:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.746 07:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.746 07:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.746 07:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.746 07:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.746 07:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:53.746 07:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:54.003 07:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:54.003 07:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.003 07:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:54.003 07:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:54.003 07:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:54.003 07:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.004 07:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.004 07:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.004 07:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.004 07:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.004 07:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.004 07:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.004 07:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.575 00:19:54.575 07:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.575 07:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.575 07:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.575 07:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.575 07:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.575 07:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.575 07:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.575 07:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.834 07:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.834 { 00:19:54.834 "cntlid": 27, 00:19:54.834 "qid": 0, 00:19:54.834 "state": "enabled", 00:19:54.834 "thread": "nvmf_tgt_poll_group_000", 00:19:54.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:54.834 "listen_address": { 00:19:54.834 "trtype": "TCP", 00:19:54.834 "adrfam": "IPv4", 00:19:54.834 "traddr": "10.0.0.2", 00:19:54.834 "trsvcid": "4420" 00:19:54.834 }, 00:19:54.834 "peer_address": { 00:19:54.834 "trtype": "TCP", 00:19:54.834 "adrfam": "IPv4", 00:19:54.834 "traddr": "10.0.0.1", 00:19:54.834 "trsvcid": "33898" 00:19:54.834 }, 00:19:54.834 "auth": { 00:19:54.834 "state": "completed", 00:19:54.834 "digest": "sha256", 00:19:54.834 "dhgroup": "ffdhe4096" 00:19:54.834 } 00:19:54.834 } 00:19:54.834 ]' 00:19:54.834 07:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.834 07:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.834 07:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.834 07:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:54.834 07:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.834 07:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.834 07:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.834 07:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.093 07:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:19:55.093 07:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:19:56.032 07:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:56.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.032 07:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.032 07:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.032 07:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.032 07:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.032 07:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.032 07:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:56.032 07:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:56.290 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:56.290 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.290 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:56.290 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:56.290 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:56.290 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.290 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.290 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.290 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.290 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.290 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.290 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.290 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.857 00:19:56.857 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:19:56.857 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.857 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.116 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.116 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.116 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.116 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.116 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.116 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.116 { 00:19:57.116 "cntlid": 29, 00:19:57.116 "qid": 0, 00:19:57.116 "state": "enabled", 00:19:57.116 "thread": "nvmf_tgt_poll_group_000", 00:19:57.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:57.116 "listen_address": { 00:19:57.116 "trtype": "TCP", 00:19:57.116 "adrfam": "IPv4", 00:19:57.116 "traddr": "10.0.0.2", 00:19:57.116 "trsvcid": "4420" 00:19:57.116 }, 00:19:57.116 "peer_address": { 00:19:57.116 "trtype": "TCP", 00:19:57.116 "adrfam": "IPv4", 00:19:57.116 "traddr": "10.0.0.1", 00:19:57.116 "trsvcid": "33920" 00:19:57.116 }, 00:19:57.116 "auth": { 00:19:57.116 "state": "completed", 00:19:57.116 "digest": "sha256", 00:19:57.116 "dhgroup": "ffdhe4096" 00:19:57.116 } 00:19:57.116 } 00:19:57.116 ]' 00:19:57.116 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.116 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.116 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.116 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:57.116 07:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.116 07:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.116 07:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.116 07:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.376 07:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:19:57.376 07:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: 
--dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:19:58.313 07:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.313 07:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.313 07:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.313 07:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.313 07:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.313 07:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.313 07:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:58.313 07:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:58.571 07:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:58.571 07:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.571 07:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.571 07:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:58.571 07:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:58.571 07:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.571 07:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:58.571 07:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.571 07:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.571 07:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.571 07:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:58.571 07:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.571 07:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:59.138 00:19:59.138 07:05:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.138 07:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.138 07:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.397 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.397 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.397 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.397 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.397 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.397 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.397 { 00:19:59.397 "cntlid": 31, 00:19:59.397 "qid": 0, 00:19:59.397 "state": "enabled", 00:19:59.397 "thread": "nvmf_tgt_poll_group_000", 00:19:59.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:59.397 "listen_address": { 00:19:59.397 "trtype": "TCP", 00:19:59.397 "adrfam": "IPv4", 00:19:59.397 "traddr": "10.0.0.2", 00:19:59.397 "trsvcid": "4420" 00:19:59.397 }, 00:19:59.397 "peer_address": { 00:19:59.397 "trtype": "TCP", 00:19:59.397 "adrfam": "IPv4", 00:19:59.397 "traddr": "10.0.0.1", 00:19:59.397 "trsvcid": "33938" 00:19:59.397 }, 00:19:59.397 "auth": { 00:19:59.397 "state": "completed", 00:19:59.397 "digest": "sha256", 00:19:59.397 "dhgroup": "ffdhe4096" 00:19:59.397 } 00:19:59.397 } 00:19:59.397 ]' 00:19:59.397 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.397 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.397 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.397 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:59.397 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.397 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.397 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.397 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.658 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:19:59.658 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:20:00.599 07:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.599 07:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.599 07:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.599 07:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.599 07:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.599 07:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:00.599 07:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.599 07:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:00.599 07:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:00.858 07:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:00.858 07:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.858 07:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:00.858 07:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:00.858 07:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:00.858 07:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.858 07:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.858 07:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.858 07:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.858 07:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.858 07:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.858 07:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.858 07:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.425 00:20:01.425 07:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.425 07:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.425 07:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.684 07:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.684 07:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.684 07:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.684 07:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.684 07:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.684 07:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.684 { 00:20:01.684 "cntlid": 33, 00:20:01.684 "qid": 0, 00:20:01.684 "state": "enabled", 00:20:01.684 "thread": "nvmf_tgt_poll_group_000", 00:20:01.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:01.684 "listen_address": { 00:20:01.684 "trtype": "TCP", 00:20:01.684 "adrfam": "IPv4", 00:20:01.684 "traddr": "10.0.0.2", 00:20:01.684 "trsvcid": "4420" 00:20:01.684 }, 00:20:01.684 "peer_address": { 00:20:01.684 "trtype": "TCP", 00:20:01.684 "adrfam": "IPv4", 00:20:01.684 "traddr": "10.0.0.1", 00:20:01.684 "trsvcid": "45928" 00:20:01.684 }, 00:20:01.684 "auth": { 00:20:01.684 "state": "completed", 00:20:01.684 "digest": "sha256", 00:20:01.684 "dhgroup": "ffdhe6144" 00:20:01.684 } 00:20:01.684 } 00:20:01.684 ]' 00:20:01.684 07:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.684 07:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:01.684 07:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.684 07:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:01.684 07:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.943 07:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.943 07:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.943 07:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.201 07:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret 
DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:20:02.201 07:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:20:03.138 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.138 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.138 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.138 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.138 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.138 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.138 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:03.138 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:03.397 07:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:03.397 07:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.397 07:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:03.397 07:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:03.397 07:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:03.397 07:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.397 07:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.397 07:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.397 07:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.397 07:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.397 07:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.397 07:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.398 07:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.968 00:20:03.968 07:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.968 07:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.968 07:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.227 07:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.227 07:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.227 07:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.227 07:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.227 07:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.227 07:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.227 { 00:20:04.227 "cntlid": 35, 00:20:04.227 "qid": 0, 00:20:04.227 "state": "enabled", 00:20:04.227 "thread": "nvmf_tgt_poll_group_000", 00:20:04.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:04.227 "listen_address": { 00:20:04.227 "trtype": "TCP", 00:20:04.227 "adrfam": "IPv4", 00:20:04.227 "traddr": "10.0.0.2", 00:20:04.227 "trsvcid": "4420" 00:20:04.227 }, 00:20:04.227 "peer_address": { 00:20:04.227 "trtype": "TCP", 00:20:04.227 "adrfam": "IPv4", 00:20:04.227 "traddr": "10.0.0.1", 00:20:04.227 "trsvcid": "45956" 00:20:04.227 }, 00:20:04.227 "auth": { 00:20:04.227 "state": "completed", 00:20:04.227 "digest": "sha256", 00:20:04.227 "dhgroup": "ffdhe6144" 00:20:04.227 } 00:20:04.227 } 00:20:04.227 ]' 00:20:04.227 07:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.227 07:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.228 07:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.228 07:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:04.228 07:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.228 07:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.228 07:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.228 07:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.486 07:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:20:04.486 07:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:20:05.424 07:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.424 07:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.424 07:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.424 07:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.424 07:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.424 07:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.424 07:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:05.424 07:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:05.683 07:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:05.683 07:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.683 07:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:05.683 07:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:05.683 07:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:05.683 07:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.683 07:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.683 07:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.683 07:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.683 07:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.683 07:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.683 07:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.683 07:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.251 00:20:06.251 07:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.251 07:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.251 07:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.509 07:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.509 07:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.509 07:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.509 07:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.509 07:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.509 07:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.509 { 00:20:06.509 "cntlid": 37, 00:20:06.509 "qid": 0, 00:20:06.509 "state": "enabled", 00:20:06.509 "thread": "nvmf_tgt_poll_group_000", 00:20:06.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:06.509 "listen_address": { 00:20:06.509 "trtype": "TCP", 00:20:06.509 "adrfam": "IPv4", 00:20:06.509 "traddr": "10.0.0.2", 00:20:06.509 "trsvcid": "4420" 00:20:06.509 }, 00:20:06.509 "peer_address": { 00:20:06.509 "trtype": "TCP", 00:20:06.509 "adrfam": "IPv4", 00:20:06.509 "traddr": "10.0.0.1", 00:20:06.509 "trsvcid": "45990" 00:20:06.509 }, 00:20:06.509 "auth": { 00:20:06.509 "state": "completed", 00:20:06.509 "digest": "sha256", 00:20:06.509 "dhgroup": "ffdhe6144" 00:20:06.509 } 00:20:06.509 } 00:20:06.509 ]' 00:20:06.510 07:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.510 07:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.510 07:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.510 07:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:06.510 07:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.768 07:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.768 07:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:06.768 07:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.029 07:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:20:07.029 07:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:20:07.964 07:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.964 07:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.964 07:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.964 07:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.964 07:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.964 07:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.964 07:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:07.964 07:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:07.964 07:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:07.964 07:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.964 07:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:07.964 07:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:07.964 07:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:07.964 07:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.964 07:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:07.964 07:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.964 07:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.224 07:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.224 07:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:08.224 07:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:08.224 07:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:08.794 00:20:08.794 07:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.794 07:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.794 07:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.794 07:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.794 07:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.794 07:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.794 07:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.794 07:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.794 07:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.794 { 00:20:08.794 "cntlid": 39, 00:20:08.794 "qid": 0, 00:20:08.794 "state": "enabled", 00:20:08.794 "thread": "nvmf_tgt_poll_group_000", 00:20:08.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:08.794 "listen_address": { 00:20:08.794 "trtype": "TCP", 00:20:08.794 "adrfam": "IPv4", 00:20:08.794 "traddr": "10.0.0.2", 00:20:08.794 "trsvcid": "4420" 00:20:08.794 }, 00:20:08.794 "peer_address": { 00:20:08.794 "trtype": "TCP", 00:20:08.794 "adrfam": "IPv4", 00:20:08.794 "traddr": "10.0.0.1", 00:20:08.794 "trsvcid": "46014" 00:20:08.794 }, 00:20:08.794 "auth": { 00:20:08.794 "state": "completed", 00:20:08.794 "digest": "sha256", 00:20:08.794 "dhgroup": "ffdhe6144" 00:20:08.794 } 00:20:08.794 } 00:20:08.794 ]' 00:20:08.794 07:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.053 07:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.053 07:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.053 07:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:09.053 07:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.053 07:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:09.053 07:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.053 07:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.311 07:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:20:09.311 07:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:20:10.252 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.252 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.252 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.252 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.252 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.252 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:10.252 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.252 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:10.252 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:10.510 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:10.510 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.510 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:10.510 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:10.510 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:10.510 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.510 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.510 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:10.510 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.510 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.510 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.510 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.510 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.449 00:20:11.449 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.449 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.449 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.708 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.708 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.708 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.708 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.708 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.708 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.708 { 00:20:11.708 "cntlid": 41, 00:20:11.708 "qid": 0, 00:20:11.708 "state": "enabled", 00:20:11.708 "thread": "nvmf_tgt_poll_group_000", 00:20:11.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:11.708 "listen_address": { 00:20:11.708 "trtype": "TCP", 00:20:11.708 "adrfam": "IPv4", 00:20:11.708 "traddr": "10.0.0.2", 00:20:11.708 "trsvcid": "4420" 00:20:11.708 }, 00:20:11.708 "peer_address": { 00:20:11.708 "trtype": "TCP", 00:20:11.708 "adrfam": "IPv4", 00:20:11.708 "traddr": "10.0.0.1", 00:20:11.708 "trsvcid": "36210" 00:20:11.708 }, 00:20:11.708 "auth": { 00:20:11.708 "state": "completed", 00:20:11.708 "digest": "sha256", 00:20:11.708 "dhgroup": "ffdhe8192" 00:20:11.708 } 00:20:11.708 } 00:20:11.708 ]' 00:20:11.708 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.708 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.708 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.708 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:11.708 07:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.708 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.708 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.709 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.966 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:20:11.966 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:20:12.907 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.907 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.907 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.907 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.907 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.907 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.907 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:12.907 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:13.165 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:13.165 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.165 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:13.165 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:13.165 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:13.165 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.165 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.165 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.165 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.165 07:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.165 07:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.165 07:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.165 07:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.104 00:20:14.104 07:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.104 07:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.104 07:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.104 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.104 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.104 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.104 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.363 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.363 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.363 { 00:20:14.363 "cntlid": 43, 00:20:14.363 "qid": 0, 00:20:14.363 "state": "enabled", 00:20:14.363 "thread": "nvmf_tgt_poll_group_000", 00:20:14.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:14.363 "listen_address": { 00:20:14.363 "trtype": "TCP", 00:20:14.363 "adrfam": "IPv4", 00:20:14.363 "traddr": "10.0.0.2", 00:20:14.363 "trsvcid": "4420" 00:20:14.363 }, 00:20:14.363 "peer_address": { 00:20:14.363 "trtype": "TCP", 00:20:14.363 "adrfam": "IPv4", 00:20:14.363 "traddr": "10.0.0.1", 00:20:14.363 "trsvcid": "36242" 00:20:14.363 }, 00:20:14.363 "auth": { 00:20:14.363 "state": "completed", 00:20:14.363 "digest": "sha256", 00:20:14.363 "dhgroup": "ffdhe8192" 00:20:14.363 } 00:20:14.363 } 00:20:14.363 ]' 00:20:14.363 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.364 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:14.364 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.364 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:14.364 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.364 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.364 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.364 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.622 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:20:14.622 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:20:15.561 07:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.561 07:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.561 07:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.561 07:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.561 07:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.561 07:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.561 07:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:15.561 07:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:15.820 07:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:15.820 07:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.820 07:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:15.820 07:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:15.820 07:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:15.820 07:05:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.820 07:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.820 07:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.820 07:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.820 07:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.820 07:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.820 07:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.820 07:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.759 00:20:16.759 07:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.759 07:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.759 07:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.018 07:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.018 07:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.018 07:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.018 07:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.018 07:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.018 07:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.018 { 00:20:17.018 "cntlid": 45, 00:20:17.018 "qid": 0, 00:20:17.018 "state": "enabled", 00:20:17.018 "thread": "nvmf_tgt_poll_group_000", 00:20:17.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:17.018 "listen_address": { 00:20:17.018 "trtype": "TCP", 00:20:17.018 "adrfam": "IPv4", 00:20:17.018 "traddr": "10.0.0.2", 00:20:17.018 "trsvcid": "4420" 00:20:17.018 }, 00:20:17.018 "peer_address": { 00:20:17.018 "trtype": "TCP", 00:20:17.018 "adrfam": "IPv4", 00:20:17.018 "traddr": "10.0.0.1", 00:20:17.018 "trsvcid": "36260" 00:20:17.018 }, 00:20:17.018 "auth": { 00:20:17.018 "state": "completed", 00:20:17.018 "digest": "sha256", 00:20:17.018 "dhgroup": "ffdhe8192" 00:20:17.018 } 00:20:17.018 } 00:20:17.018 ]' 00:20:17.018 
07:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.018 07:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.018 07:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.018 07:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:17.018 07:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.018 07:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.018 07:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.018 07:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.277 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:20:17.277 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:20:18.216 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.216 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.216 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.216 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.216 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.216 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.216 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:18.216 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:18.474 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:18.474 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.474 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:18.474 07:05:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:18.474 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:18.474 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.474 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:18.474 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.474 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.474 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.474 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:18.474 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.474 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:19.412 00:20:19.412 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.412 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.412 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.670 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.670 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.670 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.670 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.670 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.670 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.670 { 00:20:19.670 "cntlid": 47, 00:20:19.670 "qid": 0, 00:20:19.670 "state": "enabled", 00:20:19.670 "thread": "nvmf_tgt_poll_group_000", 00:20:19.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:19.670 "listen_address": { 00:20:19.670 "trtype": "TCP", 00:20:19.670 "adrfam": "IPv4", 00:20:19.670 "traddr": "10.0.0.2", 00:20:19.670 "trsvcid": "4420" 00:20:19.670 }, 00:20:19.670 "peer_address": { 00:20:19.670 "trtype": "TCP", 00:20:19.670 "adrfam": "IPv4", 00:20:19.670 "traddr": "10.0.0.1", 00:20:19.670 "trsvcid": "36272" 00:20:19.670 }, 00:20:19.670 "auth": { 00:20:19.670 "state": "completed", 00:20:19.670 
"digest": "sha256", 00:20:19.670 "dhgroup": "ffdhe8192" 00:20:19.670 } 00:20:19.670 } 00:20:19.670 ]' 00:20:19.670 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.670 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.670 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.670 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:19.670 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.670 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.670 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.670 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.929 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:20:19.929 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:20:20.869 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.869 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.869 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.869 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.869 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.869 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:20.869 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.869 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.869 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:20.869 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:21.127 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:21.127 07:05:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.127 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:21.127 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:21.127 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:21.127 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.128 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.128 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.128 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.128 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.128 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.128 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.128 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.388 00:20:21.647 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.647 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.647 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.906 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.906 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.906 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.906 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.906 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.906 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.906 { 00:20:21.906 "cntlid": 49, 00:20:21.906 "qid": 0, 00:20:21.906 "state": "enabled", 00:20:21.906 "thread": "nvmf_tgt_poll_group_000", 00:20:21.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:21.906 "listen_address": { 00:20:21.906 "trtype": "TCP", 00:20:21.906 "adrfam": "IPv4", 
00:20:21.906 "traddr": "10.0.0.2", 00:20:21.906 "trsvcid": "4420" 00:20:21.906 }, 00:20:21.906 "peer_address": { 00:20:21.906 "trtype": "TCP", 00:20:21.906 "adrfam": "IPv4", 00:20:21.906 "traddr": "10.0.0.1", 00:20:21.906 "trsvcid": "32976" 00:20:21.906 }, 00:20:21.906 "auth": { 00:20:21.906 "state": "completed", 00:20:21.906 "digest": "sha384", 00:20:21.906 "dhgroup": "null" 00:20:21.906 } 00:20:21.906 } 00:20:21.906 ]' 00:20:21.906 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.906 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.906 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.906 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:21.906 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.906 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.906 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.906 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.165 07:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:20:22.165 07:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:20:23.103 07:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.103 07:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.103 07:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.103 07:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.103 07:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.103 07:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.103 07:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:23.103 07:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:23.362 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:23.362 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.362 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:23.362 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:23.362 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:23.362 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.362 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.362 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.362 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.362 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.362 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.362 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.362 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.620 00:20:23.620 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.620 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.620 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.879 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.879 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.879 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.879 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.879 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.879 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.879 { 00:20:23.879 "cntlid": 51, 00:20:23.879 "qid": 0, 00:20:23.879 "state": "enabled", 
00:20:23.879 "thread": "nvmf_tgt_poll_group_000", 00:20:23.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:23.879 "listen_address": { 00:20:23.879 "trtype": "TCP", 00:20:23.879 "adrfam": "IPv4", 00:20:23.879 "traddr": "10.0.0.2", 00:20:23.879 "trsvcid": "4420" 00:20:23.879 }, 00:20:23.879 "peer_address": { 00:20:23.879 "trtype": "TCP", 00:20:23.879 "adrfam": "IPv4", 00:20:23.879 "traddr": "10.0.0.1", 00:20:23.879 "trsvcid": "33010" 00:20:23.879 }, 00:20:23.879 "auth": { 00:20:23.879 "state": "completed", 00:20:23.879 "digest": "sha384", 00:20:23.879 "dhgroup": "null" 00:20:23.879 } 00:20:23.879 } 00:20:23.879 ]' 00:20:23.879 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.138 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.138 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.138 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:24.138 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.138 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.138 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.138 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.397 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:20:24.397 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:20:25.337 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.337 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.337 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.337 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.337 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.337 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.338 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:25.338 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:25.596 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:25.596 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.596 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:25.596 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:25.596 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:25.596 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.596 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.596 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.596 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.596 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.596 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.596 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.596 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.855 00:20:25.855 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.855 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.855 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.113 07:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.113 07:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.113 07:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.113 07:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.113 07:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.113 07:05:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.113 { 00:20:26.113 "cntlid": 53, 00:20:26.113 "qid": 0, 00:20:26.113 "state": "enabled", 00:20:26.113 "thread": "nvmf_tgt_poll_group_000", 00:20:26.113 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:26.113 "listen_address": { 00:20:26.113 "trtype": "TCP", 00:20:26.113 "adrfam": "IPv4", 00:20:26.114 "traddr": "10.0.0.2", 00:20:26.114 "trsvcid": "4420" 00:20:26.114 }, 00:20:26.114 "peer_address": { 00:20:26.114 "trtype": "TCP", 00:20:26.114 "adrfam": "IPv4", 00:20:26.114 "traddr": "10.0.0.1", 00:20:26.114 "trsvcid": "33042" 00:20:26.114 }, 00:20:26.114 "auth": { 00:20:26.114 "state": "completed", 00:20:26.114 "digest": "sha384", 00:20:26.114 "dhgroup": "null" 00:20:26.114 } 00:20:26.114 } 00:20:26.114 ]' 00:20:26.114 07:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.372 07:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.372 07:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.372 07:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:26.372 07:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.372 07:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.372 07:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.372 07:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.630 07:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:20:26.630 07:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:20:27.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:27.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:27.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:27.824 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:27.824 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.824 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.824 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:27.824 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:27.824 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.824 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:27.824 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.824 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.824 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.824 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:27.824 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.824 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:28.083 00:20:28.083 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.083 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.083 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.361 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.361 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.361 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.361 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.361 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.361 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.361 { 00:20:28.361 "cntlid": 55, 00:20:28.361 "qid": 0, 00:20:28.361 "state": "enabled", 00:20:28.361 "thread": "nvmf_tgt_poll_group_000", 00:20:28.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:28.361 "listen_address": { 00:20:28.361 "trtype": "TCP", 00:20:28.361 "adrfam": "IPv4", 00:20:28.361 "traddr": "10.0.0.2", 00:20:28.361 "trsvcid": "4420" 00:20:28.361 }, 00:20:28.361 "peer_address": { 00:20:28.361 "trtype": "TCP", 00:20:28.361 "adrfam": "IPv4", 00:20:28.361 "traddr": "10.0.0.1", 00:20:28.361 "trsvcid": "33078" 00:20:28.361 }, 00:20:28.361 "auth": { 00:20:28.361 "state": "completed", 00:20:28.361 "digest": "sha384", 00:20:28.361 "dhgroup": "null" 00:20:28.361 } 00:20:28.361 } 00:20:28.361 ]' 00:20:28.361 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.361 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.361 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.361 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:28.361 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.361 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.361 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.361 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.927 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:20:28.927 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:20:29.866 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.866 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.866 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.866 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.866 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.866 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:29.866 07:05:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.866 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:29.866 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:29.866 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:29.866 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.866 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:29.866 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:29.866 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:29.866 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.866 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.866 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.866 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.866 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.866 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.866 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.866 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.432 00:20:30.432 07:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.432 07:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.432 07:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.692 07:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.692 07:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.692 07:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:30.692 07:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.692 07:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.692 07:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.692 { 00:20:30.692 "cntlid": 57, 00:20:30.692 "qid": 0, 00:20:30.692 "state": "enabled", 00:20:30.692 "thread": "nvmf_tgt_poll_group_000", 00:20:30.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:30.692 "listen_address": { 00:20:30.692 "trtype": "TCP", 00:20:30.692 "adrfam": "IPv4", 00:20:30.692 "traddr": "10.0.0.2", 00:20:30.692 "trsvcid": "4420" 00:20:30.692 }, 00:20:30.692 "peer_address": { 00:20:30.692 "trtype": "TCP", 00:20:30.692 "adrfam": "IPv4", 00:20:30.692 "traddr": "10.0.0.1", 00:20:30.692 "trsvcid": "59762" 00:20:30.692 }, 00:20:30.692 "auth": { 00:20:30.692 "state": "completed", 00:20:30.692 "digest": "sha384", 00:20:30.692 "dhgroup": "ffdhe2048" 00:20:30.692 } 00:20:30.692 } 00:20:30.692 ]' 00:20:30.692 07:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.692 07:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.692 07:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.692 07:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:30.692 07:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.692 07:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.692 07:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.692 07:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.952 07:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:20:30.952 07:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:20:31.888 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.888 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.888 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.888 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.888 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.888 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.888 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:31.888 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:32.146 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:32.146 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.146 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:32.146 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:32.146 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:32.146 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.146 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.146 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.146 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.146 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.146 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.146 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.146 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.717 00:20:32.717 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.717 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.717 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.717 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.717 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.717 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.717 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.717 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.717 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.717 { 00:20:32.717 "cntlid": 59, 00:20:32.717 "qid": 0, 00:20:32.717 "state": "enabled", 00:20:32.717 "thread": "nvmf_tgt_poll_group_000", 00:20:32.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:32.717 "listen_address": { 00:20:32.717 "trtype": "TCP", 00:20:32.717 "adrfam": "IPv4", 00:20:32.717 "traddr": "10.0.0.2", 00:20:32.717 "trsvcid": "4420" 00:20:32.717 }, 00:20:32.717 "peer_address": { 00:20:32.717 "trtype": "TCP", 00:20:32.717 "adrfam": "IPv4", 00:20:32.717 "traddr": "10.0.0.1", 00:20:32.717 "trsvcid": "59788" 00:20:32.717 }, 00:20:32.717 "auth": { 00:20:32.717 "state": "completed", 00:20:32.717 "digest": "sha384", 00:20:32.717 "dhgroup": "ffdhe2048" 00:20:32.717 } 00:20:32.717 } 00:20:32.717 ]' 00:20:32.717 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.976 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.976 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.976 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:32.976 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.976 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.976 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.976 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.236 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:20:33.237 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:20:34.187 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.187 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.187 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.187 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.187 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.187 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.187 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.187 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.445 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:34.445 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.445 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:34.445 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:34.445 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:34.445 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.445 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.445 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.445 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.445 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.445 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.445 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.445 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.704 00:20:34.704 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.704 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.704 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.962 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.221 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.221 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.221 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.221 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.221 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.221 { 00:20:35.221 "cntlid": 61, 00:20:35.221 "qid": 0, 00:20:35.221 "state": "enabled", 00:20:35.221 "thread": "nvmf_tgt_poll_group_000", 00:20:35.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:35.221 "listen_address": { 00:20:35.221 "trtype": "TCP", 00:20:35.221 "adrfam": "IPv4", 00:20:35.221 "traddr": "10.0.0.2", 00:20:35.221 "trsvcid": "4420" 00:20:35.221 }, 00:20:35.221 "peer_address": { 00:20:35.221 "trtype": "TCP", 00:20:35.221 "adrfam": "IPv4", 00:20:35.221 "traddr": "10.0.0.1", 00:20:35.221 "trsvcid": "59820" 00:20:35.221 }, 00:20:35.221 "auth": { 00:20:35.221 "state": "completed", 00:20:35.221 "digest": "sha384", 00:20:35.221 "dhgroup": "ffdhe2048" 00:20:35.221 } 00:20:35.221 } 00:20:35.221 ]' 00:20:35.221 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.221 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.221 07:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.221 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:35.221 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.221 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.221 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.221 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.480 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:20:35.480 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:20:36.420 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.420 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.420 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.420 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.420 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.420 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.420 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:36.420 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:36.678 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:36.678 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.678 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.678 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:36.678 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:36.678 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.679 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:36.679 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.679 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.679 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.679 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:36.679 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.679 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.938 00:20:36.938 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.938 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.938 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.197 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.197 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.197 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.197 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.197 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.197 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.197 { 00:20:37.197 "cntlid": 63, 00:20:37.197 "qid": 0, 00:20:37.197 "state": "enabled", 00:20:37.197 "thread": "nvmf_tgt_poll_group_000", 00:20:37.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:37.197 "listen_address": { 00:20:37.197 "trtype": "TCP", 00:20:37.197 "adrfam": "IPv4", 00:20:37.197 "traddr": "10.0.0.2", 00:20:37.197 "trsvcid": "4420" 00:20:37.197 }, 00:20:37.197 "peer_address": { 00:20:37.197 "trtype": "TCP", 00:20:37.197 "adrfam": "IPv4", 00:20:37.197 "traddr": "10.0.0.1", 00:20:37.197 "trsvcid": "59860" 00:20:37.197 }, 00:20:37.197 "auth": { 00:20:37.197 "state": "completed", 00:20:37.197 "digest": "sha384", 00:20:37.197 "dhgroup": "ffdhe2048" 00:20:37.197 } 00:20:37.197 } 00:20:37.197 ]' 00:20:37.197 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.197 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.197 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.456 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:37.456 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.456 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.456 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.456 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.714 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:20:37.714 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:20:38.652 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:38.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.652 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.652 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.652 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.652 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.653 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.653 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.653 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:38.653 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:38.911 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:38.911 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.911 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.911 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:38.911 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:38.911 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.911 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.911 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.911 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.911 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.911 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.911 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.911 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.170 
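(annotation, not part of the captured trace) For reference, the per-key pass the trace above keeps iterating can be condensed into a short shell sketch. This is only an illustration assuming the same rpc.py path, host RPC socket, subsystem NQN, host NQN and key names (key0/ckey0) that appear in this log; it is not a standalone script and omits the target listener/keyring setup done earlier in the test.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
# 1) pin the host-side bdev layer to one digest/dhgroup combination for this pass
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
# 2) allow the host on the target subsystem with the matching DH-HMAC-CHAP key pair
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 3) attach a controller from the host side, authenticating in-band with the same keys
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
(end of annotation)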
00:20:39.170 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.170 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.170 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.429 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.429 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.429 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.429 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.429 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.429 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.429 { 00:20:39.429 "cntlid": 65, 00:20:39.429 "qid": 0, 00:20:39.429 "state": "enabled", 00:20:39.429 "thread": "nvmf_tgt_poll_group_000", 00:20:39.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:39.429 "listen_address": { 00:20:39.429 "trtype": "TCP", 00:20:39.429 "adrfam": "IPv4", 00:20:39.429 "traddr": "10.0.0.2", 00:20:39.429 "trsvcid": "4420" 00:20:39.429 }, 00:20:39.429 "peer_address": { 00:20:39.429 "trtype": "TCP", 00:20:39.429 "adrfam": "IPv4", 00:20:39.429 "traddr": "10.0.0.1", 00:20:39.429 "trsvcid": "39818" 00:20:39.429 }, 00:20:39.429 "auth": { 00:20:39.429 "state": "completed", 00:20:39.429 "digest": "sha384", 00:20:39.429 "dhgroup": "ffdhe3072" 00:20:39.429 } 00:20:39.429 } 00:20:39.429 ]' 00:20:39.429 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.688 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.688 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.688 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:39.688 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.688 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.688 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.688 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.947 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:20:39.947 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:20:40.882 07:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.882 07:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.882 07:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.882 07:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.882 07:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.882 07:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.882 07:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:40.882 07:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:41.140 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:41.140 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.140 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:41.140 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:41.140 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:41.140 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.141 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.141 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.141 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.141 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.141 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.141 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.141 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.401 00:20:41.660 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.660 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.660 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.919 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.919 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.919 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.919 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.919 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.919 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.919 { 00:20:41.919 "cntlid": 67, 00:20:41.919 "qid": 0, 00:20:41.919 "state": "enabled", 00:20:41.919 "thread": "nvmf_tgt_poll_group_000", 00:20:41.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:41.919 "listen_address": { 00:20:41.919 "trtype": "TCP", 00:20:41.919 "adrfam": "IPv4", 00:20:41.919 "traddr": "10.0.0.2", 00:20:41.919 "trsvcid": "4420" 00:20:41.919 }, 00:20:41.919 "peer_address": { 00:20:41.919 "trtype": "TCP", 00:20:41.919 "adrfam": "IPv4", 00:20:41.919 "traddr": "10.0.0.1", 00:20:41.919 "trsvcid": "39854" 00:20:41.919 }, 00:20:41.919 "auth": { 00:20:41.919 "state": "completed", 00:20:41.919 "digest": "sha384", 00:20:41.919 "dhgroup": "ffdhe3072" 00:20:41.919 } 00:20:41.919 } 00:20:41.919 ]' 00:20:41.919 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.919 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.919 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.919 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:41.919 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.919 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.919 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.919 07:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.179 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret 
DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:20:42.179 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:20:43.118 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.118 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.118 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.118 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.118 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.118 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.118 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:43.118 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:43.377 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:43.377 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.377 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.377 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:43.377 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:43.377 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.377 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.377 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.377 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.377 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.377 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.377 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.377 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.946 00:20:43.946 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.946 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.946 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.206 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.206 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.206 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.206 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.206 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.206 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.206 { 00:20:44.206 "cntlid": 69, 00:20:44.206 "qid": 0, 00:20:44.206 "state": "enabled", 00:20:44.206 "thread": "nvmf_tgt_poll_group_000", 00:20:44.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:44.206 "listen_address": { 00:20:44.206 "trtype": "TCP", 00:20:44.206 "adrfam": "IPv4", 00:20:44.206 "traddr": "10.0.0.2", 00:20:44.206 "trsvcid": "4420" 00:20:44.206 }, 00:20:44.206 "peer_address": { 00:20:44.206 "trtype": "TCP", 00:20:44.206 "adrfam": "IPv4", 00:20:44.206 "traddr": "10.0.0.1", 00:20:44.206 "trsvcid": "39878" 00:20:44.206 }, 00:20:44.206 "auth": { 00:20:44.206 "state": "completed", 00:20:44.206 "digest": "sha384", 00:20:44.206 "dhgroup": "ffdhe3072" 00:20:44.206 } 00:20:44.206 } 00:20:44.206 ]' 00:20:44.206 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.206 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.206 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.206 07:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:44.206 07:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.206 07:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.206 07:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.206 07:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:44.465 07:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:20:44.466 07:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:20:45.470 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.470 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.470 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.470 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.470 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.470 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.470 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.470 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.766 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:45.766 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.766 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.766 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:45.766 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:45.766 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.766 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:45.766 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.766 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.766 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.766 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
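(annotation, not part of the captured trace) The [[ ... ]] comparisons just above are the verification half of each pass: the test pulls the subsystem's active qpair from the target and checks the negotiated auth parameters with jq. A rough equivalent, assuming the test's rpc_cmd wrapper for the target RPC socket and the ffdhe3072 group exercised in this block:
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
(end of annotation)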
00:20:45.766 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.766 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.055 00:20:46.055 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.055 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.055 07:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.351 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.351 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.351 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.351 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.351 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.351 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.351 { 00:20:46.351 "cntlid": 71, 00:20:46.351 "qid": 0, 00:20:46.351 "state": "enabled", 00:20:46.351 "thread": "nvmf_tgt_poll_group_000", 00:20:46.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:46.351 "listen_address": { 00:20:46.351 "trtype": "TCP", 00:20:46.351 "adrfam": "IPv4", 00:20:46.351 "traddr": "10.0.0.2", 00:20:46.351 "trsvcid": "4420" 00:20:46.351 }, 00:20:46.351 "peer_address": { 00:20:46.351 "trtype": "TCP", 00:20:46.351 "adrfam": "IPv4", 00:20:46.351 "traddr": "10.0.0.1", 00:20:46.351 "trsvcid": "39902" 00:20:46.351 }, 00:20:46.351 "auth": { 00:20:46.351 "state": "completed", 00:20:46.351 "digest": "sha384", 00:20:46.351 "dhgroup": "ffdhe3072" 00:20:46.351 } 00:20:46.351 } 00:20:46.351 ]' 00:20:46.351 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.351 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.351 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.351 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:46.351 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.636 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.636 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.636 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.905 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:20:46.905 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:20:47.899 07:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.899 07:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.899 07:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.899 07:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.899 07:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.899 07:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.899 07:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.899 07:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:47.899 07:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:47.899 07:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:47.899 07:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.899 07:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.899 07:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:47.899 07:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:47.899 07:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.899 07:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.899 07:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.899 07:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.899 07:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
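(annotation, not part of the captured trace) The nvme_connect / nvme disconnect pairs in this trace are plain nvme-cli in-band DH-HMAC-CHAP logins against the target's TCP listener. The general shape of the command, with the DHHC-1 secrets replaced by placeholders rather than the values printed above, is:
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
    --dhchap-secret 'DHHC-1:00:<host key, elided>' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller key, elided>'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
(end of annotation)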
00:20:47.899 07:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.899 07:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.899 07:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.465 00:20:48.465 07:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.465 07:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.465 07:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.723 07:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.723 07:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.723 07:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.723 07:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.723 07:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.723 07:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.723 { 00:20:48.723 "cntlid": 73, 00:20:48.723 "qid": 0, 00:20:48.723 "state": "enabled", 00:20:48.723 "thread": "nvmf_tgt_poll_group_000", 00:20:48.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:48.723 "listen_address": { 00:20:48.723 "trtype": "TCP", 00:20:48.723 "adrfam": "IPv4", 00:20:48.723 "traddr": "10.0.0.2", 00:20:48.723 "trsvcid": "4420" 00:20:48.723 }, 00:20:48.723 "peer_address": { 00:20:48.723 "trtype": "TCP", 00:20:48.723 "adrfam": "IPv4", 00:20:48.723 "traddr": "10.0.0.1", 00:20:48.723 "trsvcid": "39940" 00:20:48.723 }, 00:20:48.723 "auth": { 00:20:48.723 "state": "completed", 00:20:48.723 "digest": "sha384", 00:20:48.723 "dhgroup": "ffdhe4096" 00:20:48.723 } 00:20:48.723 } 00:20:48.723 ]' 00:20:48.723 07:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.723 07:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.723 07:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.723 07:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:48.723 07:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.723 07:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.723 
07:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.723 07:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.983 07:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:20:48.983 07:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:20:49.919 07:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.919 07:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.919 07:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.919 07:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.919 07:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.919 07:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.919 07:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:49.919 07:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:50.179 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:50.179 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.179 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:50.179 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:50.179 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:50.179 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.179 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.179 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.179 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.179 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.179 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.179 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.179 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.748 00:20:50.748 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.748 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.748 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.007 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.007 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.007 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.007 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.007 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.007 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.007 { 00:20:51.007 "cntlid": 75, 00:20:51.007 "qid": 0, 00:20:51.007 "state": "enabled", 00:20:51.007 "thread": "nvmf_tgt_poll_group_000", 00:20:51.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:51.007 "listen_address": { 00:20:51.007 "trtype": "TCP", 00:20:51.007 "adrfam": "IPv4", 00:20:51.007 "traddr": "10.0.0.2", 00:20:51.007 "trsvcid": "4420" 00:20:51.007 }, 00:20:51.007 "peer_address": { 00:20:51.007 "trtype": "TCP", 00:20:51.007 "adrfam": "IPv4", 00:20:51.007 "traddr": "10.0.0.1", 00:20:51.007 "trsvcid": "42172" 00:20:51.007 }, 00:20:51.007 "auth": { 00:20:51.007 "state": "completed", 00:20:51.007 "digest": "sha384", 00:20:51.007 "dhgroup": "ffdhe4096" 00:20:51.007 } 00:20:51.007 } 00:20:51.007 ]' 00:20:51.007 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.007 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.007 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.007 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:20:51.007 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.007 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.007 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.007 07:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.266 07:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:20:51.266 07:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:20:52.203 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.203 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.203 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.203 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.203 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.203 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.203 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:52.203 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:52.772 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:52.772 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.772 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.772 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:52.772 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:52.772 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.772 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.772 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.772 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.772 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.772 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.772 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.772 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.030 00:20:53.030 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.030 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.030 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.289 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.289 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.289 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.289 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.289 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.289 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.289 { 00:20:53.289 "cntlid": 77, 00:20:53.289 "qid": 0, 00:20:53.289 "state": "enabled", 00:20:53.289 "thread": "nvmf_tgt_poll_group_000", 00:20:53.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:53.289 "listen_address": { 00:20:53.289 "trtype": "TCP", 00:20:53.289 "adrfam": "IPv4", 00:20:53.289 "traddr": "10.0.0.2", 00:20:53.289 "trsvcid": "4420" 00:20:53.289 }, 00:20:53.289 "peer_address": { 00:20:53.289 "trtype": "TCP", 00:20:53.289 "adrfam": "IPv4", 00:20:53.289 "traddr": "10.0.0.1", 00:20:53.289 "trsvcid": "42200" 00:20:53.289 }, 00:20:53.289 "auth": { 00:20:53.289 "state": "completed", 00:20:53.289 "digest": "sha384", 00:20:53.289 "dhgroup": "ffdhe4096" 00:20:53.289 } 00:20:53.289 } 00:20:53.289 ]' 00:20:53.289 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.289 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.289 07:06:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.289 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:53.289 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.548 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.548 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.548 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.806 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:20:53.806 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:20:54.741 07:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.741 07:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.741 07:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.741 07:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.741 07:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.741 07:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.741 07:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:54.741 07:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:55.001 07:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:55.001 07:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.001 07:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:55.001 07:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:55.001 07:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:55.001 07:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.001 07:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:55.001 07:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.001 07:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.001 07:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.001 07:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:55.001 07:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:55.001 07:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:55.260 00:20:55.260 07:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.260 07:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.260 07:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.518 07:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.518 07:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.518 07:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.518 07:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.518 07:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.518 07:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.518 { 00:20:55.518 "cntlid": 79, 00:20:55.518 "qid": 0, 00:20:55.518 "state": "enabled", 00:20:55.518 "thread": "nvmf_tgt_poll_group_000", 00:20:55.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:55.518 "listen_address": { 00:20:55.518 "trtype": "TCP", 00:20:55.518 "adrfam": "IPv4", 00:20:55.518 "traddr": "10.0.0.2", 00:20:55.518 "trsvcid": "4420" 00:20:55.518 }, 00:20:55.518 "peer_address": { 00:20:55.518 "trtype": "TCP", 00:20:55.518 "adrfam": "IPv4", 00:20:55.518 "traddr": "10.0.0.1", 00:20:55.518 "trsvcid": "42222" 00:20:55.518 }, 00:20:55.518 "auth": { 00:20:55.518 "state": "completed", 00:20:55.518 "digest": "sha384", 00:20:55.518 "dhgroup": "ffdhe4096" 00:20:55.518 } 00:20:55.518 } 00:20:55.518 ]' 00:20:55.518 07:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.518 07:06:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.518 07:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.776 07:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:55.776 07:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.776 07:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.776 07:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.776 07:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.035 07:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:20:56.035 07:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:20:56.972 07:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.972 07:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.972 07:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.972 07:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.972 07:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.972 07:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.972 07:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.972 07:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:56.972 07:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:57.231 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:57.231 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.231 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.231 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:57.231 07:06:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:57.231 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.231 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.231 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.231 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.231 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.231 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.231 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.231 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.799 00:20:57.799 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.799 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.799 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.057 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.057 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.057 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.057 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.057 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.057 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.057 { 00:20:58.057 "cntlid": 81, 00:20:58.057 "qid": 0, 00:20:58.057 "state": "enabled", 00:20:58.057 "thread": "nvmf_tgt_poll_group_000", 00:20:58.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:58.057 "listen_address": { 00:20:58.057 "trtype": "TCP", 00:20:58.057 "adrfam": "IPv4", 00:20:58.057 "traddr": "10.0.0.2", 00:20:58.057 "trsvcid": "4420" 00:20:58.057 }, 00:20:58.057 "peer_address": { 00:20:58.057 "trtype": "TCP", 00:20:58.057 "adrfam": "IPv4", 00:20:58.057 "traddr": "10.0.0.1", 00:20:58.057 "trsvcid": "42244" 00:20:58.057 }, 00:20:58.057 "auth": { 00:20:58.057 "state": "completed", 00:20:58.057 "digest": 
"sha384", 00:20:58.057 "dhgroup": "ffdhe6144" 00:20:58.057 } 00:20:58.057 } 00:20:58.057 ]' 00:20:58.058 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.316 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.316 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.316 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:58.316 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.316 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.316 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.316 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.574 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:20:58.574 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:20:59.512 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.512 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.512 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.512 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.512 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.512 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.512 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:59.512 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:59.770 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:59.770 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.770 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:59.770 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:59.770 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:59.771 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.771 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.771 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.771 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.771 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.771 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.771 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.771 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.339 00:21:00.339 07:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.339 07:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.339 07:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.597 07:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.597 07:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.597 07:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.597 07:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.597 07:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.597 07:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.597 { 00:21:00.597 "cntlid": 83, 00:21:00.597 "qid": 0, 00:21:00.597 "state": "enabled", 00:21:00.597 "thread": "nvmf_tgt_poll_group_000", 00:21:00.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:00.597 "listen_address": { 00:21:00.597 "trtype": "TCP", 00:21:00.597 "adrfam": "IPv4", 00:21:00.597 "traddr": "10.0.0.2", 00:21:00.597 
"trsvcid": "4420" 00:21:00.597 }, 00:21:00.597 "peer_address": { 00:21:00.597 "trtype": "TCP", 00:21:00.597 "adrfam": "IPv4", 00:21:00.597 "traddr": "10.0.0.1", 00:21:00.597 "trsvcid": "59456" 00:21:00.597 }, 00:21:00.597 "auth": { 00:21:00.597 "state": "completed", 00:21:00.597 "digest": "sha384", 00:21:00.597 "dhgroup": "ffdhe6144" 00:21:00.598 } 00:21:00.598 } 00:21:00.598 ]' 00:21:00.598 07:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.598 07:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.598 07:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.856 07:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:00.856 07:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.856 07:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.856 07:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.856 07:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.114 07:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:21:01.114 07:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:21:02.052 07:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.052 07:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.052 07:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.052 07:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.052 07:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.052 07:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.052 07:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:02.052 07:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:02.309 
07:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:02.310 07:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.310 07:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:02.310 07:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:02.310 07:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:02.310 07:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.310 07:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.310 07:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.310 07:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.310 07:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.310 07:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.310 07:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.310 07:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.880 00:21:02.880 07:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.880 07:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.880 07:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.139 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.139 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.139 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.139 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.139 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.139 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.139 { 00:21:03.139 "cntlid": 85, 00:21:03.139 "qid": 0, 00:21:03.139 "state": "enabled", 00:21:03.139 "thread": "nvmf_tgt_poll_group_000", 00:21:03.139 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:03.139 "listen_address": { 00:21:03.139 "trtype": "TCP", 00:21:03.139 "adrfam": "IPv4", 00:21:03.139 "traddr": "10.0.0.2", 00:21:03.139 "trsvcid": "4420" 00:21:03.139 }, 00:21:03.139 "peer_address": { 00:21:03.139 "trtype": "TCP", 00:21:03.139 "adrfam": "IPv4", 00:21:03.139 "traddr": "10.0.0.1", 00:21:03.139 "trsvcid": "59484" 00:21:03.139 }, 00:21:03.139 "auth": { 00:21:03.139 "state": "completed", 00:21:03.139 "digest": "sha384", 00:21:03.139 "dhgroup": "ffdhe6144" 00:21:03.139 } 00:21:03.139 } 00:21:03.139 ]' 00:21:03.139 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.139 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.139 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.397 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:03.397 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.397 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.397 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.397 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.657 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:21:03.657 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:21:04.593 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.593 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.593 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.593 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.593 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.593 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.593 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:04.593 07:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:04.852 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:04.852 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.852 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:04.852 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:04.852 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:04.852 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.852 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:04.852 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.852 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.852 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.852 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:04.852 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.852 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:05.419 00:21:05.419 07:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.419 07:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.419 07:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.677 07:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.677 07:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.677 07:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.677 07:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.677 07:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.677 07:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.677 { 00:21:05.677 "cntlid": 87, 
00:21:05.677 "qid": 0, 00:21:05.677 "state": "enabled", 00:21:05.677 "thread": "nvmf_tgt_poll_group_000", 00:21:05.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:05.677 "listen_address": { 00:21:05.677 "trtype": "TCP", 00:21:05.677 "adrfam": "IPv4", 00:21:05.677 "traddr": "10.0.0.2", 00:21:05.677 "trsvcid": "4420" 00:21:05.677 }, 00:21:05.677 "peer_address": { 00:21:05.677 "trtype": "TCP", 00:21:05.677 "adrfam": "IPv4", 00:21:05.677 "traddr": "10.0.0.1", 00:21:05.677 "trsvcid": "59510" 00:21:05.677 }, 00:21:05.677 "auth": { 00:21:05.677 "state": "completed", 00:21:05.677 "digest": "sha384", 00:21:05.677 "dhgroup": "ffdhe6144" 00:21:05.677 } 00:21:05.677 } 00:21:05.677 ]' 00:21:05.677 07:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.677 07:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.677 07:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.677 07:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:05.677 07:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.677 07:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.677 07:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.677 07:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.936 07:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:21:05.936 07:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:21:06.874 07:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.874 07:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.874 07:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.874 07:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.874 07:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.874 07:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.874 07:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.874 07:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:06.874 07:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:07.132 07:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:07.132 07:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.132 07:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:07.132 07:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:07.132 07:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:07.132 07:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.132 07:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.132 07:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.132 07:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.132 07:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.132 07:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.132 07:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.132 07:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.069 00:21:08.069 07:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.069 07:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.069 07:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.327 07:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.327 07:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.327 07:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.327 07:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.327 07:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.327 07:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.327 { 00:21:08.327 "cntlid": 89, 00:21:08.327 "qid": 0, 00:21:08.327 "state": "enabled", 00:21:08.327 "thread": "nvmf_tgt_poll_group_000", 00:21:08.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:08.327 "listen_address": { 00:21:08.327 "trtype": "TCP", 00:21:08.327 "adrfam": "IPv4", 00:21:08.327 "traddr": "10.0.0.2", 00:21:08.327 "trsvcid": "4420" 00:21:08.327 }, 00:21:08.327 "peer_address": { 00:21:08.327 "trtype": "TCP", 00:21:08.327 "adrfam": "IPv4", 00:21:08.327 "traddr": "10.0.0.1", 00:21:08.327 "trsvcid": "59550" 00:21:08.327 }, 00:21:08.327 "auth": { 00:21:08.327 "state": "completed", 00:21:08.327 "digest": "sha384", 00:21:08.327 "dhgroup": "ffdhe8192" 00:21:08.327 } 00:21:08.327 } 00:21:08.327 ]' 00:21:08.327 07:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.327 07:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.327 07:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.585 07:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:08.585 07:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.585 07:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.585 07:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.585 07:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.843 07:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:21:08.843 07:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:21:09.782 07:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.782 07:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.782 07:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.782 07:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.782 07:06:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.782 07:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.782 07:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:09.782 07:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:10.040 07:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:10.040 07:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.040 07:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:10.040 07:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:10.040 07:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:10.040 07:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.040 07:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.040 07:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.040 07:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.040 07:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.040 07:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.041 07:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.041 07:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.980 00:21:10.980 07:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.980 07:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.980 07:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.980 07:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.980 07:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:10.980 07:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.980 07:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.240 07:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.240 07:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.240 { 00:21:11.240 "cntlid": 91, 00:21:11.240 "qid": 0, 00:21:11.240 "state": "enabled", 00:21:11.240 "thread": "nvmf_tgt_poll_group_000", 00:21:11.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:11.240 "listen_address": { 00:21:11.240 "trtype": "TCP", 00:21:11.240 "adrfam": "IPv4", 00:21:11.240 "traddr": "10.0.0.2", 00:21:11.240 "trsvcid": "4420" 00:21:11.240 }, 00:21:11.240 "peer_address": { 00:21:11.240 "trtype": "TCP", 00:21:11.240 "adrfam": "IPv4", 00:21:11.240 "traddr": "10.0.0.1", 00:21:11.240 "trsvcid": "48958" 00:21:11.240 }, 00:21:11.240 "auth": { 00:21:11.240 "state": "completed", 00:21:11.240 "digest": "sha384", 00:21:11.240 "dhgroup": "ffdhe8192" 00:21:11.240 } 00:21:11.240 } 00:21:11.240 ]' 00:21:11.240 07:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.240 07:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.240 07:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.240 07:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:11.240 07:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.240 07:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.240 07:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.240 07:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.499 07:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:21:11.499 07:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:21:12.436 07:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.436 07:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.436 07:06:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.436 07:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.436 07:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.436 07:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.436 07:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:12.436 07:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:12.694 07:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:12.694 07:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.694 07:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:12.695 07:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:12.695 07:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:12.695 07:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.695 07:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.695 07:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.695 07:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.695 07:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.695 07:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.695 07:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.695 07:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.635 00:21:13.635 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.635 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.635 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.635 07:06:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.894 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.894 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.894 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.894 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.894 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.894 { 00:21:13.894 "cntlid": 93, 00:21:13.894 "qid": 0, 00:21:13.894 "state": "enabled", 00:21:13.894 "thread": "nvmf_tgt_poll_group_000", 00:21:13.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:13.894 "listen_address": { 00:21:13.894 "trtype": "TCP", 00:21:13.894 "adrfam": "IPv4", 00:21:13.894 "traddr": "10.0.0.2", 00:21:13.894 "trsvcid": "4420" 00:21:13.894 }, 00:21:13.894 "peer_address": { 00:21:13.894 "trtype": "TCP", 00:21:13.894 "adrfam": "IPv4", 00:21:13.895 "traddr": "10.0.0.1", 00:21:13.895 "trsvcid": "48982" 00:21:13.895 }, 00:21:13.895 "auth": { 00:21:13.895 "state": "completed", 00:21:13.895 "digest": "sha384", 00:21:13.895 "dhgroup": "ffdhe8192" 00:21:13.895 } 00:21:13.895 } 00:21:13.895 ]' 00:21:13.895 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.895 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.895 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.895 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:13.895 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.895 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.895 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.895 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.153 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:21:14.153 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:21:15.094 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.094 07:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.094 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.094 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.094 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.094 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.094 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:15.094 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:15.353 07:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:15.353 07:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.353 07:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:15.353 07:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:15.353 07:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:15.353 07:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.353 07:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:15.353 07:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.353 07:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.353 07:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.353 07:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:15.353 07:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:15.354 07:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.289 00:21:16.289 07:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.289 07:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.289 
07:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.548 07:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.548 07:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.548 07:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.548 07:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.548 07:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.548 07:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.548 { 00:21:16.548 "cntlid": 95, 00:21:16.548 "qid": 0, 00:21:16.548 "state": "enabled", 00:21:16.548 "thread": "nvmf_tgt_poll_group_000", 00:21:16.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:16.548 "listen_address": { 00:21:16.548 "trtype": "TCP", 00:21:16.548 "adrfam": "IPv4", 00:21:16.548 "traddr": "10.0.0.2", 00:21:16.548 "trsvcid": "4420" 00:21:16.548 }, 00:21:16.548 "peer_address": { 00:21:16.548 "trtype": "TCP", 00:21:16.548 "adrfam": "IPv4", 00:21:16.548 "traddr": "10.0.0.1", 00:21:16.548 "trsvcid": "49000" 00:21:16.548 }, 00:21:16.548 "auth": { 00:21:16.548 "state": "completed", 00:21:16.548 "digest": "sha384", 00:21:16.548 "dhgroup": "ffdhe8192" 00:21:16.548 } 00:21:16.548 } 00:21:16.548 ]' 00:21:16.548 07:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.548 07:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.548 07:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.806 07:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:16.806 07:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.806 07:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.806 07:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.806 07:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.064 07:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:21:17.064 07:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:21:18.003 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.003 07:06:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.003 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.003 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.003 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.003 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:18.003 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:18.003 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.003 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:18.003 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:18.261 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:18.261 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.261 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.261 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:18.261 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:18.261 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.261 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.261 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.261 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.261 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.261 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.261 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.261 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.518 00:21:18.518 
07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.519 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.519 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.777 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.777 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.777 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.777 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.777 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.777 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.777 { 00:21:18.777 "cntlid": 97, 00:21:18.777 "qid": 0, 00:21:18.777 "state": "enabled", 00:21:18.777 "thread": "nvmf_tgt_poll_group_000", 00:21:18.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:18.777 "listen_address": { 00:21:18.777 "trtype": "TCP", 00:21:18.777 "adrfam": "IPv4", 00:21:18.777 "traddr": "10.0.0.2", 00:21:18.777 "trsvcid": "4420" 00:21:18.777 }, 00:21:18.777 "peer_address": { 00:21:18.777 "trtype": "TCP", 00:21:18.777 "adrfam": "IPv4", 00:21:18.777 "traddr": "10.0.0.1", 00:21:18.777 "trsvcid": "49034" 00:21:18.777 }, 00:21:18.777 "auth": { 00:21:18.777 "state": "completed", 00:21:18.777 "digest": "sha512", 00:21:18.777 "dhgroup": "null" 00:21:18.777 } 00:21:18.777 } 00:21:18.777 ]' 00:21:18.777 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.777 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.777 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.036 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:19.036 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.036 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.036 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.036 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.294 07:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:21:19.294 07:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:21:20.229 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.229 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.229 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.229 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.229 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.229 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.229 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:20.229 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:20.487 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:20.487 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.487 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.487 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:20.487 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:20.487 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.487 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.487 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.487 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.487 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.487 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.487 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.487 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.746 00:21:20.746 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.746 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.746 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.005 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.005 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.005 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.005 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.005 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.005 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.005 { 00:21:21.005 "cntlid": 99, 00:21:21.005 "qid": 0, 00:21:21.005 "state": "enabled", 00:21:21.005 "thread": "nvmf_tgt_poll_group_000", 00:21:21.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:21.005 "listen_address": { 00:21:21.005 "trtype": "TCP", 00:21:21.005 "adrfam": "IPv4", 00:21:21.005 "traddr": "10.0.0.2", 00:21:21.005 "trsvcid": "4420" 00:21:21.005 }, 00:21:21.005 "peer_address": { 00:21:21.005 "trtype": "TCP", 00:21:21.005 "adrfam": "IPv4", 00:21:21.005 "traddr": "10.0.0.1", 00:21:21.005 "trsvcid": "48314" 00:21:21.005 }, 00:21:21.005 "auth": { 00:21:21.005 "state": "completed", 00:21:21.005 "digest": "sha512", 00:21:21.005 "dhgroup": "null" 00:21:21.005 } 00:21:21.005 } 00:21:21.005 ]' 00:21:21.005 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.264 07:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.264 07:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.264 07:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:21.264 07:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.264 07:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.264 07:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.264 07:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.522 07:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:21:21.522 07:06:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:21:22.459 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.459 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.459 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.459 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.459 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.459 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.459 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:22.459 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:22.717 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:22.717 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.717 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.717 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:22.717 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:22.717 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.717 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.717 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.717 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.717 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.717 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.717 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
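Each pass in this log repeats the same DH-HMAC-CHAP setup for one digest/DH-group/key combination. A minimal sketch of the target/host sequence being exercised here, using only the RPCs visible above: the NQNs, the 10.0.0.2:4420 listener and the key names key2/ckey2 are the values this particular pass uses, the keys themselves are assumed to have been registered with both SPDK instances earlier in the script, and the target-side rpc_cmd wrapper is shown as a plain scripts/rpc.py call against the default socket.

    # host-side SPDK (rpc socket /var/tmp/host.sock): restrict DH-HMAC-CHAP
    # negotiation to a single digest and DH group for this pass
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups null

    # target side: allow the host NQN on the subsystem and pin its key pair
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # host side: attach a controller over TCP, authenticating with the same keys
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2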
00:21:22.717 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.976 00:21:22.976 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.976 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.976 07:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.234 07:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.234 07:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.234 07:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.234 07:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.234 07:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.234 07:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.234 { 00:21:23.234 "cntlid": 101, 00:21:23.234 "qid": 0, 00:21:23.234 "state": "enabled", 00:21:23.234 "thread": "nvmf_tgt_poll_group_000", 00:21:23.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:23.234 "listen_address": { 00:21:23.234 "trtype": "TCP", 00:21:23.234 "adrfam": "IPv4", 00:21:23.234 "traddr": "10.0.0.2", 00:21:23.234 "trsvcid": "4420" 00:21:23.234 }, 00:21:23.234 "peer_address": { 00:21:23.234 "trtype": "TCP", 00:21:23.234 "adrfam": "IPv4", 00:21:23.234 "traddr": "10.0.0.1", 00:21:23.234 "trsvcid": "48336" 00:21:23.234 }, 00:21:23.234 "auth": { 00:21:23.234 "state": "completed", 00:21:23.234 "digest": "sha512", 00:21:23.234 "dhgroup": "null" 00:21:23.234 } 00:21:23.234 } 00:21:23.234 ]' 00:21:23.234 07:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.234 07:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.234 07:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.492 07:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:23.492 07:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.492 07:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.492 07:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.492 07:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.750 07:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:21:23.750 07:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:21:24.686 07:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.686 07:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.686 07:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.686 07:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.686 07:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.686 07:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.686 07:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:24.686 07:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:24.944 07:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:24.944 07:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.944 07:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.944 07:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:24.944 07:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:24.944 07:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.944 07:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:24.945 07:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.945 07:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.945 07:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.945 07:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:24.945 07:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.945 07:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.203 00:21:25.203 07:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.203 07:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.203 07:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.462 07:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.462 07:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.462 07:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.462 07:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.462 07:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.462 07:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.462 { 00:21:25.462 "cntlid": 103, 00:21:25.462 "qid": 0, 00:21:25.462 "state": "enabled", 00:21:25.462 "thread": "nvmf_tgt_poll_group_000", 00:21:25.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:25.462 "listen_address": { 00:21:25.462 "trtype": "TCP", 00:21:25.462 "adrfam": "IPv4", 00:21:25.462 "traddr": "10.0.0.2", 00:21:25.462 "trsvcid": "4420" 00:21:25.462 }, 00:21:25.462 "peer_address": { 00:21:25.462 "trtype": "TCP", 00:21:25.462 "adrfam": "IPv4", 00:21:25.462 "traddr": "10.0.0.1", 00:21:25.462 "trsvcid": "48366" 00:21:25.462 }, 00:21:25.462 "auth": { 00:21:25.462 "state": "completed", 00:21:25.462 "digest": "sha512", 00:21:25.462 "dhgroup": "null" 00:21:25.462 } 00:21:25.462 } 00:21:25.462 ]' 00:21:25.462 07:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.462 07:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.462 07:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.462 07:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:25.462 07:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.720 07:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.720 07:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.720 07:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.980 07:06:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:21:25.980 07:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:21:26.917 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.917 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.917 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.917 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.917 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.917 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:26.917 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.917 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:26.917 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:27.176 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:27.176 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.176 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:27.176 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:27.176 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:27.176 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.177 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.177 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.177 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.177 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.177 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
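After each attach, the test confirms that authentication actually completed with the expected parameters by dumping the subsystem's queue pairs on the target and asserting on the auth descriptor, then detaches before the next combination. A sketch of that check, using the same jq field names as the JSON dumped above (the expected values shown are the ones for this sha512/ffdhe2048 pass):

    # target side: dump the subsystem's qpairs, including the auth descriptor
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # assert on the negotiated digest, DH group and final authentication state
    jq -r '.[0].auth.digest'  <<< "$qpairs"   # expected: sha512
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expected: ffdhe2048
    jq -r '.[0].auth.state'   <<< "$qpairs"   # expected: completed

    # tear down the host-side controller before the next digest/dhgroup/key pass
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0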
00:21:27.177 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.177 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.435 00:21:27.435 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.435 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.435 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.694 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.953 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.953 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.953 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.953 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.953 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.953 { 00:21:27.953 "cntlid": 105, 00:21:27.953 "qid": 0, 00:21:27.953 "state": "enabled", 00:21:27.953 "thread": "nvmf_tgt_poll_group_000", 00:21:27.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:27.953 "listen_address": { 00:21:27.953 "trtype": "TCP", 00:21:27.953 "adrfam": "IPv4", 00:21:27.953 "traddr": "10.0.0.2", 00:21:27.953 "trsvcid": "4420" 00:21:27.953 }, 00:21:27.953 "peer_address": { 00:21:27.953 "trtype": "TCP", 00:21:27.953 "adrfam": "IPv4", 00:21:27.953 "traddr": "10.0.0.1", 00:21:27.953 "trsvcid": "48404" 00:21:27.953 }, 00:21:27.953 "auth": { 00:21:27.953 "state": "completed", 00:21:27.953 "digest": "sha512", 00:21:27.953 "dhgroup": "ffdhe2048" 00:21:27.953 } 00:21:27.953 } 00:21:27.953 ]' 00:21:27.953 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.953 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.953 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.953 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:27.953 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.953 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.953 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.953 07:06:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.212 07:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:21:28.212 07:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:21:29.158 07:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.158 07:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:29.158 07:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.158 07:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.158 07:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.158 07:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.158 07:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:29.158 07:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:29.415 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:29.415 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.415 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:29.415 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:29.415 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:29.415 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.416 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.416 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.416 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:29.416 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.416 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.416 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.416 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.673 00:21:29.673 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.673 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.673 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.931 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.931 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.931 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.931 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.931 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.931 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.931 { 00:21:29.931 "cntlid": 107, 00:21:29.931 "qid": 0, 00:21:29.931 "state": "enabled", 00:21:29.931 "thread": "nvmf_tgt_poll_group_000", 00:21:29.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:29.931 "listen_address": { 00:21:29.931 "trtype": "TCP", 00:21:29.931 "adrfam": "IPv4", 00:21:29.931 "traddr": "10.0.0.2", 00:21:29.931 "trsvcid": "4420" 00:21:29.931 }, 00:21:29.931 "peer_address": { 00:21:29.931 "trtype": "TCP", 00:21:29.931 "adrfam": "IPv4", 00:21:29.931 "traddr": "10.0.0.1", 00:21:29.931 "trsvcid": "37322" 00:21:29.931 }, 00:21:29.931 "auth": { 00:21:29.931 "state": "completed", 00:21:29.931 "digest": "sha512", 00:21:29.931 "dhgroup": "ffdhe2048" 00:21:29.931 } 00:21:29.931 } 00:21:29.931 ]' 00:21:29.931 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.189 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.189 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.189 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:30.189 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:30.189 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.189 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.189 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.447 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:21:30.447 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:21:31.384 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.384 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.384 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.384 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.384 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.384 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.384 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:31.384 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:31.642 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:31.642 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.642 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.642 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:31.642 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:31.642 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.642 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
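Besides the SPDK host stack, each key is also driven through the kernel initiator: nvme_connect hands the DHHC-1 secrets directly to nvme-cli, and the host is removed from the subsystem once the connection has been verified. A sketch of that leg with the secrets elided; the full DHHC-1:01:/DHHC-1:02: blobs are the ones printed in the log above.

    # kernel host: connect to the subsystem, supplying host and controller secrets
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
        --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'

    # drop the connection and de-authorize the host before the next pass
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55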
00:21:31.642 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.643 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.643 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.643 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.643 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.643 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.901 00:21:31.901 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.901 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.901 07:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.159 07:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.159 07:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.159 07:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.159 07:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.159 07:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.159 07:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.159 { 00:21:32.159 "cntlid": 109, 00:21:32.159 "qid": 0, 00:21:32.159 "state": "enabled", 00:21:32.159 "thread": "nvmf_tgt_poll_group_000", 00:21:32.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:32.159 "listen_address": { 00:21:32.159 "trtype": "TCP", 00:21:32.159 "adrfam": "IPv4", 00:21:32.159 "traddr": "10.0.0.2", 00:21:32.159 "trsvcid": "4420" 00:21:32.159 }, 00:21:32.159 "peer_address": { 00:21:32.159 "trtype": "TCP", 00:21:32.159 "adrfam": "IPv4", 00:21:32.159 "traddr": "10.0.0.1", 00:21:32.159 "trsvcid": "37358" 00:21:32.159 }, 00:21:32.159 "auth": { 00:21:32.159 "state": "completed", 00:21:32.159 "digest": "sha512", 00:21:32.159 "dhgroup": "ffdhe2048" 00:21:32.159 } 00:21:32.159 } 00:21:32.159 ]' 00:21:32.159 07:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.159 07:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.159 07:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.417 07:06:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:32.417 07:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.417 07:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.417 07:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.417 07:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.675 07:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:21:32.675 07:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:21:33.609 07:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.609 07:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.609 07:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.609 07:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.609 07:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.609 07:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.609 07:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:33.609 07:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:33.868 07:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:33.868 07:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.868 07:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.868 07:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:33.868 07:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:33.868 07:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.868 07:06:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:33.868 07:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.868 07:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.868 07:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.868 07:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:33.868 07:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.868 07:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.125 00:21:34.125 07:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.125 07:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.125 07:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.384 07:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.384 07:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.384 07:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.384 07:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.384 07:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.384 07:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.384 { 00:21:34.384 "cntlid": 111, 00:21:34.384 "qid": 0, 00:21:34.384 "state": "enabled", 00:21:34.384 "thread": "nvmf_tgt_poll_group_000", 00:21:34.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:34.384 "listen_address": { 00:21:34.384 "trtype": "TCP", 00:21:34.384 "adrfam": "IPv4", 00:21:34.384 "traddr": "10.0.0.2", 00:21:34.384 "trsvcid": "4420" 00:21:34.384 }, 00:21:34.384 "peer_address": { 00:21:34.384 "trtype": "TCP", 00:21:34.384 "adrfam": "IPv4", 00:21:34.384 "traddr": "10.0.0.1", 00:21:34.384 "trsvcid": "37382" 00:21:34.384 }, 00:21:34.384 "auth": { 00:21:34.384 "state": "completed", 00:21:34.384 "digest": "sha512", 00:21:34.384 "dhgroup": "ffdhe2048" 00:21:34.384 } 00:21:34.384 } 00:21:34.384 ]' 00:21:34.384 07:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.642 07:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.642 
07:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.642 07:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:34.642 07:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.642 07:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.642 07:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.642 07:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.900 07:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:21:34.900 07:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:21:35.836 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.836 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.836 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.836 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.836 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.836 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.836 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.836 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:35.836 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:36.095 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:36.095 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.095 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.095 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:36.095 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:36.095 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.095 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.095 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.095 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.095 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.095 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.095 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.095 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.354 00:21:36.354 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.354 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.354 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.612 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.612 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.612 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.612 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.612 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.612 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.612 { 00:21:36.612 "cntlid": 113, 00:21:36.612 "qid": 0, 00:21:36.612 "state": "enabled", 00:21:36.612 "thread": "nvmf_tgt_poll_group_000", 00:21:36.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:36.612 "listen_address": { 00:21:36.612 "trtype": "TCP", 00:21:36.612 "adrfam": "IPv4", 00:21:36.612 "traddr": "10.0.0.2", 00:21:36.612 "trsvcid": "4420" 00:21:36.612 }, 00:21:36.612 "peer_address": { 00:21:36.612 "trtype": "TCP", 00:21:36.612 "adrfam": "IPv4", 00:21:36.612 "traddr": "10.0.0.1", 00:21:36.612 "trsvcid": "37416" 00:21:36.612 }, 00:21:36.612 "auth": { 00:21:36.612 "state": "completed", 00:21:36.612 "digest": "sha512", 00:21:36.612 "dhgroup": "ffdhe3072" 00:21:36.612 } 00:21:36.612 } 00:21:36.612 ]' 00:21:36.612 07:06:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.612 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.612 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.871 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:36.871 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.871 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.871 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.871 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.129 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:21:37.129 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:21:38.065 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.065 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.065 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.065 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.065 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.065 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.065 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:38.065 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:38.323 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:38.323 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.323 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:38.323 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:38.323 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:38.323 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.323 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.323 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.323 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.323 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.323 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.323 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.323 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.581 00:21:38.581 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.581 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.581 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.839 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.839 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.839 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.839 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.839 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.839 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.839 { 00:21:38.839 "cntlid": 115, 00:21:38.839 "qid": 0, 00:21:38.839 "state": "enabled", 00:21:38.839 "thread": "nvmf_tgt_poll_group_000", 00:21:38.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:38.839 "listen_address": { 00:21:38.839 "trtype": "TCP", 00:21:38.839 "adrfam": "IPv4", 00:21:38.839 "traddr": "10.0.0.2", 00:21:38.839 "trsvcid": "4420" 00:21:38.839 }, 00:21:38.839 "peer_address": { 00:21:38.839 "trtype": "TCP", 00:21:38.839 "adrfam": "IPv4", 
00:21:38.839 "traddr": "10.0.0.1", 00:21:38.839 "trsvcid": "37456" 00:21:38.839 }, 00:21:38.839 "auth": { 00:21:38.839 "state": "completed", 00:21:38.839 "digest": "sha512", 00:21:38.839 "dhgroup": "ffdhe3072" 00:21:38.839 } 00:21:38.839 } 00:21:38.839 ]' 00:21:38.839 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.839 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.839 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.097 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:39.097 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.097 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.097 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.097 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.356 07:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:21:39.356 07:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:21:40.294 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.294 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.294 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.294 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.294 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.294 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.294 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:40.294 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:40.553 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:40.553 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.553 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.553 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:40.553 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:40.553 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.553 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.553 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.553 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.553 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.553 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.553 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.553 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.814 00:21:40.814 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.814 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.814 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.072 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.072 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.072 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.072 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.072 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.072 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.072 { 00:21:41.072 "cntlid": 117, 00:21:41.072 "qid": 0, 00:21:41.072 "state": "enabled", 00:21:41.072 "thread": "nvmf_tgt_poll_group_000", 00:21:41.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:41.072 "listen_address": { 00:21:41.072 "trtype": "TCP", 
00:21:41.072 "adrfam": "IPv4", 00:21:41.072 "traddr": "10.0.0.2", 00:21:41.072 "trsvcid": "4420" 00:21:41.072 }, 00:21:41.072 "peer_address": { 00:21:41.072 "trtype": "TCP", 00:21:41.072 "adrfam": "IPv4", 00:21:41.072 "traddr": "10.0.0.1", 00:21:41.072 "trsvcid": "44780" 00:21:41.072 }, 00:21:41.072 "auth": { 00:21:41.072 "state": "completed", 00:21:41.072 "digest": "sha512", 00:21:41.073 "dhgroup": "ffdhe3072" 00:21:41.073 } 00:21:41.073 } 00:21:41.073 ]' 00:21:41.073 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.073 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.073 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.331 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:41.331 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.331 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.331 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.331 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.590 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:21:41.590 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:21:42.526 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.526 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.526 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.526 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.526 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.526 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.526 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:42.526 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:42.785 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:42.785 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.785 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.785 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:42.785 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:42.785 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.785 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:42.785 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.785 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.785 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.785 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:42.785 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:42.785 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:43.043 00:21:43.043 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.043 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.043 07:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.301 07:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.301 07:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.301 07:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.301 07:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.301 07:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.301 07:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.301 { 00:21:43.301 "cntlid": 119, 00:21:43.301 "qid": 0, 00:21:43.301 "state": "enabled", 00:21:43.301 "thread": "nvmf_tgt_poll_group_000", 00:21:43.301 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:43.301 "listen_address": { 00:21:43.301 "trtype": "TCP", 00:21:43.302 "adrfam": "IPv4", 00:21:43.302 "traddr": "10.0.0.2", 00:21:43.302 "trsvcid": "4420" 00:21:43.302 }, 00:21:43.302 "peer_address": { 00:21:43.302 "trtype": "TCP", 00:21:43.302 "adrfam": "IPv4", 00:21:43.302 "traddr": "10.0.0.1", 00:21:43.302 "trsvcid": "44808" 00:21:43.302 }, 00:21:43.302 "auth": { 00:21:43.302 "state": "completed", 00:21:43.302 "digest": "sha512", 00:21:43.302 "dhgroup": "ffdhe3072" 00:21:43.302 } 00:21:43.302 } 00:21:43.302 ]' 00:21:43.302 07:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.302 07:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.302 07:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.302 07:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:43.302 07:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.560 07:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.560 07:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.560 07:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.817 07:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:21:43.817 07:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:21:44.752 07:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.752 07:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.752 07:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.752 07:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.752 07:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.752 07:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:44.752 07:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.752 07:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:44.752 07:07:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:45.010 07:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:45.010 07:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.010 07:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:45.010 07:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:45.010 07:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:45.010 07:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.010 07:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.010 07:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.010 07:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.010 07:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.010 07:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.010 07:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.010 07:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.268 00:21:45.268 07:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.268 07:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.268 07:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.527 07:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.527 07:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.527 07:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.527 07:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.527 07:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.527 07:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.527 { 00:21:45.527 "cntlid": 121, 00:21:45.527 "qid": 0, 00:21:45.527 "state": "enabled", 00:21:45.527 "thread": "nvmf_tgt_poll_group_000", 00:21:45.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:45.527 "listen_address": { 00:21:45.527 "trtype": "TCP", 00:21:45.527 "adrfam": "IPv4", 00:21:45.527 "traddr": "10.0.0.2", 00:21:45.527 "trsvcid": "4420" 00:21:45.527 }, 00:21:45.527 "peer_address": { 00:21:45.527 "trtype": "TCP", 00:21:45.527 "adrfam": "IPv4", 00:21:45.527 "traddr": "10.0.0.1", 00:21:45.527 "trsvcid": "44826" 00:21:45.527 }, 00:21:45.527 "auth": { 00:21:45.527 "state": "completed", 00:21:45.527 "digest": "sha512", 00:21:45.527 "dhgroup": "ffdhe4096" 00:21:45.527 } 00:21:45.527 } 00:21:45.527 ]' 00:21:45.527 07:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.786 07:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.786 07:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.786 07:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:45.786 07:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.786 07:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.786 07:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.786 07:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.044 07:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:21:46.044 07:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:21:46.981 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.981 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.981 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.981 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.981 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
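
After the SPDK-to-SPDK check, each iteration repeats the handshake with the kernel initiator: nvme-cli connects using the raw DH-HMAC-CHAP secrets, disconnects, and the host entry is removed from the subsystem before the next key is tried. A condensed sketch of that leg follows, reusing rpc/hostnqn/subnqn from the earlier sketch; the literal DHHC-1 secrets from the trace are replaced by placeholder variables, and the flags are copied from the nvme connect lines above.

    # key / ckey: the DHHC-1:xx:... secrets for this key index (placeholders here).
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
        -q "$hostnqn" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n "$subnqn"

    # Target side: drop the host again so the next key/dhgroup pair starts clean.
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
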
00:21:46.981 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.981 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:46.981 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:47.240 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:47.240 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.240 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:47.240 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:47.240 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:47.240 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.240 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.240 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.240 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.240 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.240 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.240 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.240 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.501 00:21:47.762 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.762 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.762 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.021 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.021 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.021 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.021 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.021 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.021 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.021 { 00:21:48.021 "cntlid": 123, 00:21:48.021 "qid": 0, 00:21:48.021 "state": "enabled", 00:21:48.021 "thread": "nvmf_tgt_poll_group_000", 00:21:48.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:48.021 "listen_address": { 00:21:48.021 "trtype": "TCP", 00:21:48.021 "adrfam": "IPv4", 00:21:48.021 "traddr": "10.0.0.2", 00:21:48.021 "trsvcid": "4420" 00:21:48.021 }, 00:21:48.021 "peer_address": { 00:21:48.021 "trtype": "TCP", 00:21:48.021 "adrfam": "IPv4", 00:21:48.021 "traddr": "10.0.0.1", 00:21:48.021 "trsvcid": "44846" 00:21:48.021 }, 00:21:48.021 "auth": { 00:21:48.021 "state": "completed", 00:21:48.021 "digest": "sha512", 00:21:48.021 "dhgroup": "ffdhe4096" 00:21:48.021 } 00:21:48.021 } 00:21:48.021 ]' 00:21:48.021 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.021 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.021 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.021 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:48.021 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.021 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.021 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.021 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.283 07:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:21:48.283 07:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:21:49.221 07:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.221 07:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.221 07:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.221 07:07:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.221 07:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.221 07:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.221 07:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:49.221 07:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:49.480 07:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:49.480 07:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.480 07:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.480 07:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:49.480 07:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:49.480 07:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.480 07:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.480 07:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.480 07:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.480 07:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.480 07:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.480 07:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.480 07:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.047 00:21:50.047 07:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.047 07:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.047 07:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.306 07:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.306 07:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.306 07:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.306 07:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.306 07:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.306 07:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.306 { 00:21:50.306 "cntlid": 125, 00:21:50.306 "qid": 0, 00:21:50.306 "state": "enabled", 00:21:50.306 "thread": "nvmf_tgt_poll_group_000", 00:21:50.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:50.306 "listen_address": { 00:21:50.306 "trtype": "TCP", 00:21:50.306 "adrfam": "IPv4", 00:21:50.306 "traddr": "10.0.0.2", 00:21:50.306 "trsvcid": "4420" 00:21:50.306 }, 00:21:50.306 "peer_address": { 00:21:50.306 "trtype": "TCP", 00:21:50.306 "adrfam": "IPv4", 00:21:50.306 "traddr": "10.0.0.1", 00:21:50.306 "trsvcid": "48860" 00:21:50.306 }, 00:21:50.306 "auth": { 00:21:50.306 "state": "completed", 00:21:50.306 "digest": "sha512", 00:21:50.306 "dhgroup": "ffdhe4096" 00:21:50.306 } 00:21:50.306 } 00:21:50.306 ]' 00:21:50.306 07:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.306 07:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.306 07:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.306 07:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:50.306 07:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.306 07:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.306 07:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.306 07:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.566 07:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:21:50.566 07:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:21:51.503 07:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.503 07:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.503 07:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.503 07:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.503 07:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.503 07:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.503 07:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:51.503 07:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:51.762 07:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:51.762 07:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.762 07:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.762 07:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:51.762 07:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:51.762 07:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.762 07:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:51.762 07:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.762 07:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.762 07:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.762 07:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:51.762 07:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:51.762 07:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:52.331 00:21:52.331 07:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.331 07:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.331 07:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.589 07:07:13 
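One detail worth calling out from the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion seen in the trace: the key3 iterations deliberately omit the controller key, so they exercise host-only authentication (and the matching nvme connect calls above likewise carry no --dhchap-ctrl-secret). A hypothetical standalone rendering of that idiom:

    # ${var:+word} expands to nothing when var is unset or empty, so for a key with no
    # controller counterpart the --dhchap-ctrlr-key argument is dropped entirely
    keyid=3
    ckeys=(ckey0 ckey1 ckey2 "")   # assumed shape: key3 has no controller key in this test
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    printf 'ckey expands to: %s\n' "${ckey[*]}"   # empty for keyid=3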
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.589 07:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.589 07:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.589 07:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.589 07:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.589 07:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.589 { 00:21:52.589 "cntlid": 127, 00:21:52.589 "qid": 0, 00:21:52.589 "state": "enabled", 00:21:52.589 "thread": "nvmf_tgt_poll_group_000", 00:21:52.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:52.589 "listen_address": { 00:21:52.589 "trtype": "TCP", 00:21:52.589 "adrfam": "IPv4", 00:21:52.589 "traddr": "10.0.0.2", 00:21:52.589 "trsvcid": "4420" 00:21:52.589 }, 00:21:52.589 "peer_address": { 00:21:52.589 "trtype": "TCP", 00:21:52.589 "adrfam": "IPv4", 00:21:52.589 "traddr": "10.0.0.1", 00:21:52.589 "trsvcid": "48876" 00:21:52.589 }, 00:21:52.589 "auth": { 00:21:52.589 "state": "completed", 00:21:52.589 "digest": "sha512", 00:21:52.589 "dhgroup": "ffdhe4096" 00:21:52.589 } 00:21:52.589 } 00:21:52.589 ]' 00:21:52.589 07:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.589 07:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.589 07:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.589 07:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:52.589 07:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.589 07:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.589 07:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.589 07:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.849 07:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:21:52.849 07:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:21:53.786 07:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.046 07:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.046 07:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.047 07:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.047 07:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.047 07:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:54.047 07:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.047 07:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:54.047 07:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:54.306 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:54.306 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.306 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:54.306 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:54.306 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:54.306 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.306 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.306 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.306 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.306 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.306 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.306 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.306 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.873 00:21:54.873 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.873 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.873 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.131 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.131 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.131 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.131 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.131 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.131 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.131 { 00:21:55.131 "cntlid": 129, 00:21:55.131 "qid": 0, 00:21:55.131 "state": "enabled", 00:21:55.131 "thread": "nvmf_tgt_poll_group_000", 00:21:55.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:55.131 "listen_address": { 00:21:55.131 "trtype": "TCP", 00:21:55.131 "adrfam": "IPv4", 00:21:55.131 "traddr": "10.0.0.2", 00:21:55.131 "trsvcid": "4420" 00:21:55.131 }, 00:21:55.131 "peer_address": { 00:21:55.131 "trtype": "TCP", 00:21:55.131 "adrfam": "IPv4", 00:21:55.131 "traddr": "10.0.0.1", 00:21:55.131 "trsvcid": "48906" 00:21:55.131 }, 00:21:55.131 "auth": { 00:21:55.131 "state": "completed", 00:21:55.131 "digest": "sha512", 00:21:55.131 "dhgroup": "ffdhe6144" 00:21:55.131 } 00:21:55.131 } 00:21:55.131 ]' 00:21:55.131 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.131 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.131 07:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.131 07:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:55.131 07:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.131 07:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.131 07:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.131 07:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.391 07:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:21:55.391 07:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret 
DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:21:56.328 07:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.328 07:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.328 07:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.328 07:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.328 07:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.328 07:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.328 07:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:56.328 07:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:56.587 07:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:56.587 07:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.587 07:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:56.587 07:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:56.587 07:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:56.587 07:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.587 07:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.587 07:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.587 07:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.846 07:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.846 07:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.846 07:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.846 07:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.106 00:21:57.366 07:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.366 07:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.366 07:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.624 07:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.624 07:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.624 07:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.624 07:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.624 07:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.624 07:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.624 { 00:21:57.624 "cntlid": 131, 00:21:57.624 "qid": 0, 00:21:57.624 "state": "enabled", 00:21:57.624 "thread": "nvmf_tgt_poll_group_000", 00:21:57.624 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:57.624 "listen_address": { 00:21:57.624 "trtype": "TCP", 00:21:57.624 "adrfam": "IPv4", 00:21:57.624 "traddr": "10.0.0.2", 00:21:57.624 "trsvcid": "4420" 00:21:57.624 }, 00:21:57.624 "peer_address": { 00:21:57.624 "trtype": "TCP", 00:21:57.624 "adrfam": "IPv4", 00:21:57.624 "traddr": "10.0.0.1", 00:21:57.624 "trsvcid": "48942" 00:21:57.624 }, 00:21:57.624 "auth": { 00:21:57.624 "state": "completed", 00:21:57.624 "digest": "sha512", 00:21:57.624 "dhgroup": "ffdhe6144" 00:21:57.624 } 00:21:57.624 } 00:21:57.624 ]' 00:21:57.624 07:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.624 07:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.624 07:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.624 07:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:57.624 07:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.624 07:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.624 07:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.624 07:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.883 07:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:21:57.883 07:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:21:58.817 07:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.817 07:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.817 07:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.817 07:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.817 07:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.817 07:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.817 07:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:58.817 07:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:59.076 07:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:59.076 07:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.076 07:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:59.076 07:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:59.076 07:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:59.076 07:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.076 07:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.076 07:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.076 07:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.076 07:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.076 07:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.076 07:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.076 07:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.643 00:21:59.643 07:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.643 07:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.643 07:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.901 07:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.901 07:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.901 07:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.901 07:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.901 07:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.901 07:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.901 { 00:21:59.901 "cntlid": 133, 00:21:59.901 "qid": 0, 00:21:59.901 "state": "enabled", 00:21:59.901 "thread": "nvmf_tgt_poll_group_000", 00:21:59.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:59.901 "listen_address": { 00:21:59.901 "trtype": "TCP", 00:21:59.901 "adrfam": "IPv4", 00:21:59.901 "traddr": "10.0.0.2", 00:21:59.901 "trsvcid": "4420" 00:21:59.901 }, 00:21:59.901 "peer_address": { 00:21:59.901 "trtype": "TCP", 00:21:59.901 "adrfam": "IPv4", 00:21:59.901 "traddr": "10.0.0.1", 00:21:59.901 "trsvcid": "41330" 00:21:59.901 }, 00:21:59.901 "auth": { 00:21:59.901 "state": "completed", 00:21:59.901 "digest": "sha512", 00:21:59.901 "dhgroup": "ffdhe6144" 00:21:59.901 } 00:21:59.901 } 00:21:59.901 ]' 00:21:59.901 07:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.901 07:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.901 07:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.901 07:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:59.901 07:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.159 07:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.159 07:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.159 07:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.417 07:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret 
DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:22:00.417 07:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:22:01.355 07:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.355 07:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.355 07:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.355 07:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.355 07:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.355 07:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.355 07:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:01.355 07:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:01.613 07:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:01.613 07:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.613 07:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:01.613 07:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:01.613 07:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:01.613 07:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.613 07:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:01.613 07:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.613 07:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.613 07:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.613 07:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:01.613 07:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:01.613 07:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.181 00:22:02.181 07:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.181 07:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.181 07:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.439 07:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.439 07:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.440 07:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.440 07:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.440 07:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.440 07:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.440 { 00:22:02.440 "cntlid": 135, 00:22:02.440 "qid": 0, 00:22:02.440 "state": "enabled", 00:22:02.440 "thread": "nvmf_tgt_poll_group_000", 00:22:02.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:02.440 "listen_address": { 00:22:02.440 "trtype": "TCP", 00:22:02.440 "adrfam": "IPv4", 00:22:02.440 "traddr": "10.0.0.2", 00:22:02.440 "trsvcid": "4420" 00:22:02.440 }, 00:22:02.440 "peer_address": { 00:22:02.440 "trtype": "TCP", 00:22:02.440 "adrfam": "IPv4", 00:22:02.440 "traddr": "10.0.0.1", 00:22:02.440 "trsvcid": "41346" 00:22:02.440 }, 00:22:02.440 "auth": { 00:22:02.440 "state": "completed", 00:22:02.440 "digest": "sha512", 00:22:02.440 "dhgroup": "ffdhe6144" 00:22:02.440 } 00:22:02.440 } 00:22:02.440 ]' 00:22:02.440 07:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.440 07:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.440 07:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.698 07:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:02.698 07:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.698 07:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.698 07:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.698 07:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.956 07:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:22:02.956 07:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:22:03.895 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.895 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.895 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.895 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.895 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.895 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:03.895 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:03.895 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:03.895 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:04.153 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:04.153 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.153 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:04.153 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:04.153 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:04.153 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.153 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.153 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.153 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.153 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.153 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.153 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.153 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.092 00:22:05.092 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:05.092 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.092 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.351 07:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.351 07:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.351 07:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.351 07:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.351 07:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.351 07:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:05.351 { 00:22:05.351 "cntlid": 137, 00:22:05.351 "qid": 0, 00:22:05.351 "state": "enabled", 00:22:05.351 "thread": "nvmf_tgt_poll_group_000", 00:22:05.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:05.351 "listen_address": { 00:22:05.351 "trtype": "TCP", 00:22:05.351 "adrfam": "IPv4", 00:22:05.351 "traddr": "10.0.0.2", 00:22:05.351 "trsvcid": "4420" 00:22:05.351 }, 00:22:05.351 "peer_address": { 00:22:05.351 "trtype": "TCP", 00:22:05.351 "adrfam": "IPv4", 00:22:05.351 "traddr": "10.0.0.1", 00:22:05.351 "trsvcid": "41374" 00:22:05.351 }, 00:22:05.351 "auth": { 00:22:05.351 "state": "completed", 00:22:05.351 "digest": "sha512", 00:22:05.351 "dhgroup": "ffdhe8192" 00:22:05.351 } 00:22:05.351 } 00:22:05.351 ]' 00:22:05.351 07:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:05.351 07:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:05.351 07:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:05.351 07:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:05.351 07:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:05.351 07:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.351 07:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.351 07:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.610 07:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:22:05.610 07:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:22:06.546 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.546 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.546 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.546 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.546 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.546 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:06.546 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:06.546 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:06.804 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:06.804 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.804 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:06.804 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:06.804 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:06.804 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.804 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.804 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.804 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.804 07:07:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.804 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.804 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.804 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.740 00:22:07.740 07:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.740 07:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.740 07:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.999 07:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.999 07:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.999 07:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.999 07:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.999 07:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.999 07:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.999 { 00:22:07.999 "cntlid": 139, 00:22:07.999 "qid": 0, 00:22:07.999 "state": "enabled", 00:22:07.999 "thread": "nvmf_tgt_poll_group_000", 00:22:07.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:07.999 "listen_address": { 00:22:07.999 "trtype": "TCP", 00:22:07.999 "adrfam": "IPv4", 00:22:07.999 "traddr": "10.0.0.2", 00:22:07.999 "trsvcid": "4420" 00:22:07.999 }, 00:22:07.999 "peer_address": { 00:22:07.999 "trtype": "TCP", 00:22:07.999 "adrfam": "IPv4", 00:22:07.999 "traddr": "10.0.0.1", 00:22:07.999 "trsvcid": "41406" 00:22:07.999 }, 00:22:07.999 "auth": { 00:22:07.999 "state": "completed", 00:22:07.999 "digest": "sha512", 00:22:07.999 "dhgroup": "ffdhe8192" 00:22:07.999 } 00:22:07.999 } 00:22:07.999 ]' 00:22:07.999 07:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.999 07:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.999 07:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.257 07:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:08.257 07:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.257 07:07:29 
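After each attach, the test reads the negotiated parameters back from the target's view of the qpair; a minimal sketch of that check for the sha512/ffdhe8192 pass, assuming rpc_cmd talks to the default target RPC socket, is:

    # list the subsystem's qpairs as seen by the target and capture the JSON
    qpairs=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # the qpair's auth object must report the digest/dhgroup forced on the host side
    # and an authentication state of "completed"
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # detach on the host before moving to the next key/dhgroup combination
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_detach_controller nvme0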
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.257 07:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.258 07:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.514 07:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:22:08.514 07:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: --dhchap-ctrl-secret DHHC-1:02:NmJiNTI5OGQ4NTE3MWQzMDY3N2ExZWEzYzNlNjViYzE0Y2ZiZDA5ZmEzOGMxYmM5xhrtlA==: 00:22:09.448 07:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.448 07:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.448 07:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.448 07:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.448 07:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.448 07:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.448 07:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:09.448 07:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:09.706 07:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:09.706 07:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.706 07:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:09.706 07:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:09.706 07:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:09.706 07:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.706 07:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.706 07:07:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.706 07:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.706 07:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.706 07:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.706 07:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.706 07:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.645 00:22:10.645 07:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.645 07:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.645 07:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.645 07:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.645 07:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.645 07:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.645 07:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.645 07:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.645 07:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:10.645 { 00:22:10.645 "cntlid": 141, 00:22:10.645 "qid": 0, 00:22:10.645 "state": "enabled", 00:22:10.645 "thread": "nvmf_tgt_poll_group_000", 00:22:10.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:10.645 "listen_address": { 00:22:10.645 "trtype": "TCP", 00:22:10.645 "adrfam": "IPv4", 00:22:10.645 "traddr": "10.0.0.2", 00:22:10.645 "trsvcid": "4420" 00:22:10.645 }, 00:22:10.645 "peer_address": { 00:22:10.645 "trtype": "TCP", 00:22:10.645 "adrfam": "IPv4", 00:22:10.645 "traddr": "10.0.0.1", 00:22:10.645 "trsvcid": "44238" 00:22:10.645 }, 00:22:10.645 "auth": { 00:22:10.645 "state": "completed", 00:22:10.645 "digest": "sha512", 00:22:10.645 "dhgroup": "ffdhe8192" 00:22:10.645 } 00:22:10.645 } 00:22:10.645 ]' 00:22:10.645 07:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:10.904 07:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.904 07:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:10.904 07:07:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:10.904 07:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.904 07:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.904 07:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.904 07:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.162 07:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:22:11.162 07:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:01:YTEyNWM3Zjg0NDVmYjQ0MDlmNjE3YjgzYmU4YTM4MTYKJrp/: 00:22:12.101 07:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.101 07:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.101 07:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.101 07:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.101 07:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.101 07:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.101 07:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:12.101 07:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:12.359 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:12.359 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.359 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:12.359 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:12.359 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:12.359 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.359 07:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:12.359 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.359 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.359 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.359 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:12.359 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.359 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:13.296 00:22:13.296 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.296 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:13.296 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.554 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.554 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.554 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.554 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.554 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.554 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.554 { 00:22:13.554 "cntlid": 143, 00:22:13.554 "qid": 0, 00:22:13.554 "state": "enabled", 00:22:13.554 "thread": "nvmf_tgt_poll_group_000", 00:22:13.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:13.554 "listen_address": { 00:22:13.554 "trtype": "TCP", 00:22:13.554 "adrfam": "IPv4", 00:22:13.554 "traddr": "10.0.0.2", 00:22:13.554 "trsvcid": "4420" 00:22:13.554 }, 00:22:13.554 "peer_address": { 00:22:13.554 "trtype": "TCP", 00:22:13.554 "adrfam": "IPv4", 00:22:13.554 "traddr": "10.0.0.1", 00:22:13.554 "trsvcid": "44278" 00:22:13.554 }, 00:22:13.554 "auth": { 00:22:13.554 "state": "completed", 00:22:13.554 "digest": "sha512", 00:22:13.554 "dhgroup": "ffdhe8192" 00:22:13.554 } 00:22:13.554 } 00:22:13.554 ]' 00:22:13.554 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.554 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.554 
07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.554 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:13.554 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.554 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.554 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.555 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.813 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:22:13.813 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:22:14.750 07:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.750 07:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.750 07:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.750 07:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.750 07:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.750 07:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:14.750 07:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:14.750 07:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:14.750 07:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:14.750 07:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:14.750 07:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:15.008 07:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:15.008 07:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.008 07:07:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:15.008 07:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:15.008 07:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:15.008 07:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.008 07:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.008 07:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.008 07:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.008 07:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.008 07:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.008 07:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.008 07:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.941 00:22:15.941 07:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.941 07:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.941 07:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.199 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.199 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.199 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.199 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.199 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.199 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.199 { 00:22:16.199 "cntlid": 145, 00:22:16.200 "qid": 0, 00:22:16.200 "state": "enabled", 00:22:16.200 "thread": "nvmf_tgt_poll_group_000", 00:22:16.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:16.200 "listen_address": { 00:22:16.200 "trtype": "TCP", 00:22:16.200 "adrfam": "IPv4", 00:22:16.200 "traddr": "10.0.0.2", 00:22:16.200 "trsvcid": "4420" 00:22:16.200 }, 00:22:16.200 "peer_address": { 00:22:16.200 
"trtype": "TCP", 00:22:16.200 "adrfam": "IPv4", 00:22:16.200 "traddr": "10.0.0.1", 00:22:16.200 "trsvcid": "44312" 00:22:16.200 }, 00:22:16.200 "auth": { 00:22:16.200 "state": "completed", 00:22:16.200 "digest": "sha512", 00:22:16.200 "dhgroup": "ffdhe8192" 00:22:16.200 } 00:22:16.200 } 00:22:16.200 ]' 00:22:16.200 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.200 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.200 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:16.200 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:16.200 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:16.200 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.200 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.200 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.458 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:22:16.459 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjEyZGZmODVlZjkzYmNiZmMyNzI2YTEwZGYzZjcyN2JkMzhiYjNhMmFmMTY2MDQye34UVg==: --dhchap-ctrl-secret DHHC-1:03:MGU1OTUzZjNlNGQyMmY5NTY0Y2VlZTk5MTFmM2E0ODlmZWIxMGJkZDUzMTgwN2MxN2EyNTA2ZGVhNmNhZDNmZB7cbfg=: 00:22:17.393 07:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.393 07:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.393 07:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.393 07:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.393 07:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.393 07:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:17.393 07:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.393 07:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.393 07:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.393 07:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:17.393 07:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:17.393 07:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:17.393 07:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:17.393 07:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:17.393 07:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:17.393 07:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:17.393 07:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:17.393 07:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:17.394 07:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:18.329 request: 00:22:18.329 { 00:22:18.329 "name": "nvme0", 00:22:18.329 "trtype": "tcp", 00:22:18.329 "traddr": "10.0.0.2", 00:22:18.329 "adrfam": "ipv4", 00:22:18.329 "trsvcid": "4420", 00:22:18.329 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:18.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:18.329 "prchk_reftag": false, 00:22:18.329 "prchk_guard": false, 00:22:18.329 "hdgst": false, 00:22:18.329 "ddgst": false, 00:22:18.329 "dhchap_key": "key2", 00:22:18.329 "allow_unrecognized_csi": false, 00:22:18.329 "method": "bdev_nvme_attach_controller", 00:22:18.329 "req_id": 1 00:22:18.329 } 00:22:18.329 Got JSON-RPC error response 00:22:18.329 response: 00:22:18.329 { 00:22:18.329 "code": -5, 00:22:18.329 "message": "Input/output error" 00:22:18.329 } 00:22:18.329 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:18.329 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:18.330 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:18.330 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:18.330 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:18.330 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.330 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.330 07:07:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.330 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.330 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.330 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.330 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.330 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:18.330 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:18.330 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:18.330 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:18.330 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:18.330 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:18.330 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:18.330 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:18.330 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:18.330 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:19.265 request: 00:22:19.265 { 00:22:19.265 "name": "nvme0", 00:22:19.265 "trtype": "tcp", 00:22:19.265 "traddr": "10.0.0.2", 00:22:19.265 "adrfam": "ipv4", 00:22:19.265 "trsvcid": "4420", 00:22:19.265 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:19.265 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:19.265 "prchk_reftag": false, 00:22:19.265 "prchk_guard": false, 00:22:19.265 "hdgst": false, 00:22:19.265 "ddgst": false, 00:22:19.265 "dhchap_key": "key1", 00:22:19.265 "dhchap_ctrlr_key": "ckey2", 00:22:19.265 "allow_unrecognized_csi": false, 00:22:19.265 "method": "bdev_nvme_attach_controller", 00:22:19.265 "req_id": 1 00:22:19.265 } 00:22:19.265 Got JSON-RPC error response 00:22:19.265 response: 00:22:19.265 { 00:22:19.265 "code": -5, 00:22:19.265 "message": "Input/output error" 00:22:19.265 } 00:22:19.265 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:19.265 07:07:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:19.265 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:19.265 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:19.265 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.265 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.265 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.265 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.265 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:19.265 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.265 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.265 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.265 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.265 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:19.265 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.265 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:19.265 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:19.265 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:19.265 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:19.265 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.265 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.265 07:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.834 request: 00:22:19.834 { 00:22:19.834 "name": "nvme0", 00:22:19.834 "trtype": "tcp", 00:22:19.834 "traddr": "10.0.0.2", 00:22:19.834 "adrfam": "ipv4", 00:22:19.834 "trsvcid": "4420", 00:22:19.834 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:19.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:19.834 "prchk_reftag": false, 00:22:19.834 "prchk_guard": false, 00:22:19.834 "hdgst": false, 00:22:19.834 "ddgst": false, 00:22:19.834 "dhchap_key": "key1", 00:22:19.834 "dhchap_ctrlr_key": "ckey1", 00:22:19.834 "allow_unrecognized_csi": false, 00:22:19.834 "method": "bdev_nvme_attach_controller", 00:22:19.834 "req_id": 1 00:22:19.834 } 00:22:19.834 Got JSON-RPC error response 00:22:19.834 response: 00:22:19.834 { 00:22:19.834 "code": -5, 00:22:19.834 "message": "Input/output error" 00:22:19.834 } 00:22:19.834 07:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:19.834 07:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:19.834 07:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:19.834 07:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:19.834 07:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.834 07:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.834 07:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.834 07:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.834 07:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 241220 00:22:19.834 07:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 241220 ']' 00:22:19.834 07:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 241220 00:22:19.834 07:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:19.834 07:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:19.834 07:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 241220 00:22:20.092 07:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:20.092 07:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:20.092 07:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 241220' 00:22:20.092 killing process with pid 241220 00:22:20.092 07:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 241220 00:22:20.092 07:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 241220 00:22:20.092 07:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:20.092 07:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:20.092 07:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:20.092 07:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:22:20.092 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=264332 00:22:20.092 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:20.092 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 264332 00:22:20.092 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 264332 ']' 00:22:20.092 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.092 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:20.092 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.092 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:20.092 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.350 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.350 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:20.350 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:20.350 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:20.350 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.350 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.350 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:20.350 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 264332 00:22:20.350 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 264332 ']' 00:22:20.350 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.350 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:20.350 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
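At this point the log shows the nvmf target being restarted with --wait-for-rpc -L nvmf_auth (pid 264332) and the test waiting for its RPC socket; the entries that follow register the generated DH-HMAC-CHAP key files with the target-side keyring before the authentication cases are re-run against keyring-backed keys. A minimal sketch of that keyring setup, using only the rpc.py path, RPC names, and key-file names that appear in the surrounding entries (the file names and paths are specific to this run and would differ elsewhere):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Register each generated key (and its controller key, when one exists) with the keyring.
    "$rpc" keyring_file_add_key key0  /tmp/spdk.key-null.oJ7
    "$rpc" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0h7
    "$rpc" keyring_file_add_key key1  /tmp/spdk.key-sha256.7UP
    "$rpc" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Svh
    # ...and likewise for key2/ckey2/key3, as the following entries show.
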
00:22:20.350 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:20.350 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.610 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.610 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:20.610 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:20.610 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.610 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.870 null0 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oJ7 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.0h7 ]] 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0h7 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.7UP 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Svh ]] 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Svh 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:20.870 07:07:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.hvN 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.u57 ]] 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.u57 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.oMV 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
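Here the log is in the middle of connect_authenticate for key3 over sha512/ffdhe8192: the host NQN has just been added to the subsystem with --dhchap-key key3, and the next entries attach the host-side controller with the same key and verify it via bdev_nvme_get_controllers and nvmf_subsystem_get_qpairs. A condensed sketch of that host-side attach, using only values that appear verbatim in this log (the host RPC socket is /var/tmp/host.sock):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Attach a bdev controller to the target, authenticating with DH-HMAC-CHAP key3.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
    # On success the controller shows up as nvme0 in bdev_nvme_get_controllers,
    # and the qpair reported by nvmf_subsystem_get_qpairs has auth.state "completed".
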
00:22:20.870 07:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:22.248 nvme0n1 00:22:22.248 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:22.248 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:22.248 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.507 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.507 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.507 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.507 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.507 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.507 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:22.507 { 00:22:22.507 "cntlid": 1, 00:22:22.507 "qid": 0, 00:22:22.507 "state": "enabled", 00:22:22.507 "thread": "nvmf_tgt_poll_group_000", 00:22:22.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:22.507 "listen_address": { 00:22:22.507 "trtype": "TCP", 00:22:22.507 "adrfam": "IPv4", 00:22:22.507 "traddr": "10.0.0.2", 00:22:22.507 "trsvcid": "4420" 00:22:22.507 }, 00:22:22.507 "peer_address": { 00:22:22.507 "trtype": "TCP", 00:22:22.507 "adrfam": "IPv4", 00:22:22.507 "traddr": "10.0.0.1", 00:22:22.507 "trsvcid": "54964" 00:22:22.507 }, 00:22:22.507 "auth": { 00:22:22.507 "state": "completed", 00:22:22.507 "digest": "sha512", 00:22:22.507 "dhgroup": "ffdhe8192" 00:22:22.507 } 00:22:22.507 } 00:22:22.507 ]' 00:22:22.507 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:22.507 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:22.507 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:22.768 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:22.768 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:22.768 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.768 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.768 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.051 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:22:23.051 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:22:24.087 07:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.087 07:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:24.087 07:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.087 07:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.087 07:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.087 07:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:24.087 07:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.087 07:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.087 07:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.087 07:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:24.087 07:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:24.386 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:24.386 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:24.386 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:24.386 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:24.386 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.386 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:24.386 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.386 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:24.386 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:24.386 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:24.682 request: 00:22:24.682 { 00:22:24.682 "name": "nvme0", 00:22:24.682 "trtype": "tcp", 00:22:24.682 "traddr": "10.0.0.2", 00:22:24.682 "adrfam": "ipv4", 00:22:24.682 "trsvcid": "4420", 00:22:24.682 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:24.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:24.682 "prchk_reftag": false, 00:22:24.682 "prchk_guard": false, 00:22:24.682 "hdgst": false, 00:22:24.682 "ddgst": false, 00:22:24.682 "dhchap_key": "key3", 00:22:24.682 "allow_unrecognized_csi": false, 00:22:24.682 "method": "bdev_nvme_attach_controller", 00:22:24.682 "req_id": 1 00:22:24.682 } 00:22:24.682 Got JSON-RPC error response 00:22:24.682 response: 00:22:24.682 { 00:22:24.682 "code": -5, 00:22:24.682 "message": "Input/output error" 00:22:24.682 } 00:22:24.682 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:24.682 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:24.682 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:24.682 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:24.682 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:24.682 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:24.682 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:24.682 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:24.940 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:24.940 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:24.940 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:24.940 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:24.940 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.940 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:24.940 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.940 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:24.940 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:24.940 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:25.198 request: 00:22:25.198 { 00:22:25.198 "name": "nvme0", 00:22:25.198 "trtype": "tcp", 00:22:25.198 "traddr": "10.0.0.2", 00:22:25.199 "adrfam": "ipv4", 00:22:25.199 "trsvcid": "4420", 00:22:25.199 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:25.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:25.199 "prchk_reftag": false, 00:22:25.199 "prchk_guard": false, 00:22:25.199 "hdgst": false, 00:22:25.199 "ddgst": false, 00:22:25.199 "dhchap_key": "key3", 00:22:25.199 "allow_unrecognized_csi": false, 00:22:25.199 "method": "bdev_nvme_attach_controller", 00:22:25.199 "req_id": 1 00:22:25.199 } 00:22:25.199 Got JSON-RPC error response 00:22:25.199 response: 00:22:25.199 { 00:22:25.199 "code": -5, 00:22:25.199 "message": "Input/output error" 00:22:25.199 } 00:22:25.199 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:25.199 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:25.199 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:25.199 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:25.199 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:25.199 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:25.199 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:25.199 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:25.199 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:25.199 07:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:25.457 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:25.457 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.457 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.457 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.457 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:25.458 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.458 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.458 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.458 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:25.458 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:25.458 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:25.458 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:25.458 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:25.458 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:25.458 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:25.458 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:25.458 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:25.458 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:26.024 request: 00:22:26.024 { 00:22:26.024 "name": "nvme0", 00:22:26.024 "trtype": "tcp", 00:22:26.024 "traddr": "10.0.0.2", 00:22:26.024 "adrfam": "ipv4", 00:22:26.024 "trsvcid": "4420", 00:22:26.024 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:26.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:26.024 "prchk_reftag": false, 00:22:26.024 "prchk_guard": false, 00:22:26.024 "hdgst": false, 00:22:26.024 "ddgst": false, 00:22:26.024 "dhchap_key": "key0", 00:22:26.024 "dhchap_ctrlr_key": "key1", 00:22:26.024 "allow_unrecognized_csi": false, 00:22:26.024 "method": "bdev_nvme_attach_controller", 00:22:26.024 "req_id": 1 00:22:26.024 } 00:22:26.024 Got JSON-RPC error response 00:22:26.024 response: 00:22:26.024 { 00:22:26.024 "code": -5, 00:22:26.024 "message": "Input/output error" 00:22:26.024 } 00:22:26.024 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:26.024 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:26.024 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:26.024 07:07:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:26.024 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:26.024 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:26.024 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:26.282 nvme0n1 00:22:26.282 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:26.282 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:26.282 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.540 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.540 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.540 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.106 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:27.106 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.106 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.106 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.106 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:27.106 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:27.106 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:28.478 nvme0n1 00:22:28.478 07:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:28.478 07:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:28.478 07:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.478 07:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.478 07:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:28.478 07:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.478 07:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.478 07:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.479 07:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:28.479 07:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:28.479 07:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.736 07:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.736 07:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:22:28.736 07:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: --dhchap-ctrl-secret DHHC-1:03:NWFhODFhNDYwNWJjY2ZkMjZjOTQ3MmJlZWYzOGMwNzc0ZjkyN2FkYjA4ODY4YjY1MTUzNTdhYTJhYTkzODM5MhLbLTY=: 00:22:29.670 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:29.670 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:29.670 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:29.670 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:29.670 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:29.670 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:29.670 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:29.670 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.670 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.928 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:29.928 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:29.928 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:29.928 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:29.928 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.928 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:29.928 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.928 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:29.928 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:29.928 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:30.863 request: 00:22:30.863 { 00:22:30.863 "name": "nvme0", 00:22:30.863 "trtype": "tcp", 00:22:30.863 "traddr": "10.0.0.2", 00:22:30.863 "adrfam": "ipv4", 00:22:30.863 "trsvcid": "4420", 00:22:30.863 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:30.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:30.863 "prchk_reftag": false, 00:22:30.863 "prchk_guard": false, 00:22:30.863 "hdgst": false, 00:22:30.863 "ddgst": false, 00:22:30.863 "dhchap_key": "key1", 00:22:30.863 "allow_unrecognized_csi": false, 00:22:30.863 "method": "bdev_nvme_attach_controller", 00:22:30.863 "req_id": 1 00:22:30.863 } 00:22:30.863 Got JSON-RPC error response 00:22:30.863 response: 00:22:30.863 { 00:22:30.863 "code": -5, 00:22:30.863 "message": "Input/output error" 00:22:30.863 } 00:22:30.863 07:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:30.863 07:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:30.863 07:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:30.863 07:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:30.863 07:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:30.863 07:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:30.863 07:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:32.238 nvme0n1 00:22:32.238 07:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:32.238 07:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:32.238 07:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.495 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.495 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.495 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.752 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:32.752 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.752 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.752 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.752 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:32.752 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:32.752 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:33.010 nvme0n1 00:22:33.010 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:33.010 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:33.010 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.269 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.269 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.269 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.527 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:33.527 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.527 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.527 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.527 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: '' 2s 00:22:33.527 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:33.527 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:33.527 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: 00:22:33.527 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:33.527 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:33.527 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:33.527 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: ]] 00:22:33.527 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ODBmNWEzYzkyYzc0ZTg5MjRiMDViMTEwMTAyZmMwNjAGUQzo: 00:22:33.527 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:33.527 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:33.527 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:36.053 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:36.053 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:36.053 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:36.053 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:36.053 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:36.053 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:36.053 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:36.053 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:36.053 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.053 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.053 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.053 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: 2s 00:22:36.053 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:36.053 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:36.053 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:36.053 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: 00:22:36.053 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:36.053 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:36.053 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:36.053 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: ]] 00:22:36.053 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YmE3YjRjNTcxYmZiZWQ2NGYyNDEyYjJjOTFjNjY2ZmM2NzVkYTRiYzNlNjM4ZTQyxZYLnw==: 00:22:36.053 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:36.053 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:37.952 07:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:37.952 07:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:37.952 07:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:37.952 07:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:37.952 07:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:37.953 07:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:37.953 07:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:37.953 07:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.953 07:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:37.953 07:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.953 07:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.953 07:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.953 07:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:37.953 07:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:37.953 07:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:38.885 nvme0n1 00:22:39.143 07:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:39.143 07:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.143 07:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.143 07:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.143 07:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:39.143 07:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:40.077 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:40.077 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:40.078 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.078 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.078 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:40.078 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.078 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.078 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.078 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:40.078 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:40.644 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:40.644 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:40.644 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.644 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.644 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:40.644 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.644 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.902 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.902 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:40.902 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:40.902 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:40.902 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:40.902 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.902 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:40.902 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.902 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:40.902 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:41.469 request: 00:22:41.469 { 00:22:41.469 "name": "nvme0", 00:22:41.469 "dhchap_key": "key1", 00:22:41.469 "dhchap_ctrlr_key": "key3", 00:22:41.469 "method": "bdev_nvme_set_keys", 00:22:41.469 "req_id": 1 00:22:41.469 } 00:22:41.469 Got JSON-RPC error response 00:22:41.469 response: 00:22:41.469 { 00:22:41.469 "code": -13, 00:22:41.469 "message": "Permission denied" 00:22:41.469 } 00:22:41.469 07:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:41.469 07:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:41.469 07:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:41.469 07:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:41.469 07:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:41.469 07:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:41.469 07:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.035 07:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:22:42.035 07:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:42.968 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:42.968 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:42.969 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.226 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:43.227 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:43.227 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.227 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.227 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.227 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:43.227 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:43.227 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:44.601 nvme0n1 00:22:44.601 07:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:44.601 07:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.601 07:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.601 07:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.601 07:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:44.601 07:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:44.601 07:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:44.601 07:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
00:22:44.601 07:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:44.601 07:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:44.601 07:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:44.601 07:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:44.601 07:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:45.167 request: 00:22:45.167 { 00:22:45.167 "name": "nvme0", 00:22:45.167 "dhchap_key": "key2", 00:22:45.167 "dhchap_ctrlr_key": "key0", 00:22:45.167 "method": "bdev_nvme_set_keys", 00:22:45.167 "req_id": 1 00:22:45.167 } 00:22:45.167 Got JSON-RPC error response 00:22:45.167 response: 00:22:45.167 { 00:22:45.167 "code": -13, 00:22:45.167 "message": "Permission denied" 00:22:45.167 } 00:22:45.167 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:45.167 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:45.167 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:45.167 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:45.167 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:45.167 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.167 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:45.424 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:45.424 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:46.797 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:46.797 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:46.797 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.797 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:46.797 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:46.797 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:46.797 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 241241 00:22:46.797 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 241241 ']' 00:22:46.797 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 241241 00:22:46.797 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:46.797 07:08:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:46.797 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 241241 00:22:46.797 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:46.797 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:46.797 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 241241' 00:22:46.797 killing process with pid 241241 00:22:46.797 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 241241 00:22:46.797 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 241241 00:22:47.363 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:47.363 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:47.363 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:47.363 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:47.363 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:47.363 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:47.363 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:47.363 rmmod nvme_tcp 00:22:47.363 rmmod nvme_fabrics 00:22:47.363 rmmod nvme_keyring 00:22:47.363 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:47.363 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:47.363 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:47.363 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 264332 ']' 00:22:47.363 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 264332 00:22:47.363 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 264332 ']' 00:22:47.363 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 264332 00:22:47.363 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:47.363 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.363 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 264332 00:22:47.363 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:47.363 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:47.363 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 264332' 00:22:47.363 killing process with pid 264332 00:22:47.363 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 264332 00:22:47.363 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 264332 00:22:47.623 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:47.623 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:47.623 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:47.623 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:47.623 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:47.623 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:47.623 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:47.623 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.623 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:47.623 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.623 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.623 07:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.530 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:49.530 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.oJ7 /tmp/spdk.key-sha256.7UP /tmp/spdk.key-sha384.hvN /tmp/spdk.key-sha512.oMV /tmp/spdk.key-sha512.0h7 /tmp/spdk.key-sha384.Svh /tmp/spdk.key-sha256.u57 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:49.530 00:22:49.530 real 3m33.863s 00:22:49.530 user 8m19.460s 00:22:49.530 sys 0m28.491s 00:22:49.530 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:49.530 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.530 ************************************ 00:22:49.530 END TEST nvmf_auth_target 00:22:49.530 ************************************ 00:22:49.530 07:08:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:49.530 07:08:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:49.530 07:08:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:49.530 07:08:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:49.530 07:08:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:49.530 ************************************ 00:22:49.530 START TEST nvmf_bdevio_no_huge 00:22:49.530 ************************************ 00:22:49.530 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:49.790 * Looking for test storage... 
00:22:49.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:49.790 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:49.790 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:22:49.790 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:49.790 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:49.790 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:49.790 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:49.790 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:49.790 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:49.790 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:49.790 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:49.790 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:49.790 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:49.790 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:49.790 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:49.790 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:49.790 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:49.790 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:49.790 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:49.790 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:49.790 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:49.790 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:49.790 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:49.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.791 --rc genhtml_branch_coverage=1 00:22:49.791 --rc genhtml_function_coverage=1 00:22:49.791 --rc genhtml_legend=1 00:22:49.791 --rc geninfo_all_blocks=1 00:22:49.791 --rc geninfo_unexecuted_blocks=1 00:22:49.791 00:22:49.791 ' 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:49.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.791 --rc genhtml_branch_coverage=1 00:22:49.791 --rc genhtml_function_coverage=1 00:22:49.791 --rc genhtml_legend=1 00:22:49.791 --rc geninfo_all_blocks=1 00:22:49.791 --rc geninfo_unexecuted_blocks=1 00:22:49.791 00:22:49.791 ' 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:49.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.791 --rc genhtml_branch_coverage=1 00:22:49.791 --rc genhtml_function_coverage=1 00:22:49.791 --rc genhtml_legend=1 00:22:49.791 --rc geninfo_all_blocks=1 00:22:49.791 --rc geninfo_unexecuted_blocks=1 00:22:49.791 00:22:49.791 ' 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:49.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.791 --rc genhtml_branch_coverage=1 00:22:49.791 --rc genhtml_function_coverage=1 00:22:49.791 --rc genhtml_legend=1 00:22:49.791 --rc geninfo_all_blocks=1 00:22:49.791 --rc geninfo_unexecuted_blocks=1 00:22:49.791 00:22:49.791 ' 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:49.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:49.791 07:08:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:51.697 
07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:51.697 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:51.698 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:51.698 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.698 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:51.698 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:51.698 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.698 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.698 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.698 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:51.698 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.698 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.698 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:51.698 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:51.698 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.698 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.698 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.698 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.698 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.698 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:51.698 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:51.698 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:51.698 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:51.957 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:51.957 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:51.957 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:51.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:22:51.958 00:22:51.958 --- 10.0.0.2 ping statistics --- 00:22:51.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.958 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:22:51.958 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:51.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:22:51.958 00:22:51.958 --- 10.0.0.1 ping statistics --- 00:22:51.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.958 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:22:51.958 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.958 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:51.958 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:51.958 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.958 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:51.958 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:51.958 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.958 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:51.958 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:51.958 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:51.958 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:51.958 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.958 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:51.958 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=269588 00:22:51.958 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:51.958 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 269588 00:22:51.958 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 269588 ']' 00:22:51.958 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.958 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.958 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.958 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.958 07:08:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:51.958 [2024-11-18 07:08:12.894869] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:22:51.958 [2024-11-18 07:08:12.894954] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:52.217 [2024-11-18 07:08:12.974615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:52.217 [2024-11-18 07:08:13.021071] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.217 [2024-11-18 07:08:13.021129] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.217 [2024-11-18 07:08:13.021142] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.217 [2024-11-18 07:08:13.021153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.217 [2024-11-18 07:08:13.021163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
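Condensed, the trace above is the standard test/nvmf/common.sh bring-up for a physical (phy) TCP run: one port of the E810 pair is moved into a private network namespace, both sides get addresses on 10.0.0.0/24, the NVMe/TCP port is allowed through the host firewall, connectivity is checked with ping in both directions, and nvmf_tgt is then launched inside that namespace without hugepages. The sketch below restates it as a standalone script; the individual commands and arguments are the ones visible in the trace, while the relative paths, the abbreviated iptables comment, and the socket-polling loop standing in for waitforlisten are illustrative assumptions. The earlier "common.sh: line 33: [: : integer expression expected" message is harmless, it is bash complaining about [ '' -eq 1 ] when an optional flag is simply unset in this configuration.

  # move the target-side port into a private namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, stays in the host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic on port 4420; the SPDK_NVMF comment lets the teardown remove exactly this rule
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # start the target inside the namespace: no hugepages, 1024 MB of plain memory, cores 3-6 (mask 0x78)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done                 # simplified stand-in for waitforlisten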
00:22:52.217 [2024-11-18 07:08:13.022202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:52.217 [2024-11-18 07:08:13.022264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:52.217 [2024-11-18 07:08:13.022330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:52.217 [2024-11-18 07:08:13.022332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:52.217 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.217 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:52.217 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:52.217 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:52.217 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:52.217 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.217 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:52.217 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.217 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:52.217 [2024-11-18 07:08:13.176584] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.217 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.217 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:52.217 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.217 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:52.217 Malloc0 00:22:52.476 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.476 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:52.476 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.476 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:52.476 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.476 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:52.476 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.476 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:52.476 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.476 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:52.476 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.476 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:52.476 [2024-11-18 07:08:13.215091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.476 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.476 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:52.476 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:52.476 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:52.476 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:52.476 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:52.476 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:52.476 { 00:22:52.476 "params": { 00:22:52.476 "name": "Nvme$subsystem", 00:22:52.476 "trtype": "$TEST_TRANSPORT", 00:22:52.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.476 "adrfam": "ipv4", 00:22:52.476 "trsvcid": "$NVMF_PORT", 00:22:52.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.476 "hdgst": ${hdgst:-false}, 00:22:52.476 "ddgst": ${ddgst:-false} 00:22:52.476 }, 00:22:52.476 "method": "bdev_nvme_attach_controller" 00:22:52.476 } 00:22:52.476 EOF 00:22:52.476 )") 00:22:52.476 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:52.476 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:52.476 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:52.476 07:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:52.476 "params": { 00:22:52.476 "name": "Nvme1", 00:22:52.476 "trtype": "tcp", 00:22:52.476 "traddr": "10.0.0.2", 00:22:52.476 "adrfam": "ipv4", 00:22:52.476 "trsvcid": "4420", 00:22:52.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:52.476 "hdgst": false, 00:22:52.476 "ddgst": false 00:22:52.476 }, 00:22:52.476 "method": "bdev_nvme_attach_controller" 00:22:52.476 }' 00:22:52.476 [2024-11-18 07:08:13.265849] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
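The rpc_cmd calls in the bdevio setup are thin wrappers around scripts/rpc.py talking to the target's default /var/tmp/spdk.sock (a Unix socket, so no netns exec is needed even though the target runs inside the namespace). Spelled out, and with the rpc.py path written relative to the SPDK tree as an assumption, the provisioning amounts to:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                                      # TCP transport with the harness' usual options
  $rpc bdev_malloc_create 64 512 -b Malloc0                                         # 64 MiB RAM-backed bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001    # allow any host, set the serial number
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                     # expose Malloc0 as namespace 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio itself then plays the initiator role: the generated JSON shown above boils down to a single bdev_nvme_attach_controller entry pointing at 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode1, and the suite runs against the resulting Nvme1n1 bdev, likewise with --no-huge -s 1024.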
00:22:52.476 [2024-11-18 07:08:13.265916] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid269618 ] 00:22:52.476 [2024-11-18 07:08:13.333898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:52.476 [2024-11-18 07:08:13.385506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.476 [2024-11-18 07:08:13.385543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.476 [2024-11-18 07:08:13.385547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.734 I/O targets: 00:22:52.734 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:52.734 00:22:52.734 00:22:52.734 CUnit - A unit testing framework for C - Version 2.1-3 00:22:52.734 http://cunit.sourceforge.net/ 00:22:52.734 00:22:52.734 00:22:52.734 Suite: bdevio tests on: Nvme1n1 00:22:52.993 Test: blockdev write read block ...passed 00:22:52.993 Test: blockdev write zeroes read block ...passed 00:22:52.993 Test: blockdev write zeroes read no split ...passed 00:22:52.993 Test: blockdev write zeroes read split ...passed 00:22:52.993 Test: blockdev write zeroes read split partial ...passed 00:22:52.993 Test: blockdev reset ...[2024-11-18 07:08:13.852411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:52.993 [2024-11-18 07:08:13.852546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b6a0 (9): Bad file descriptor 00:22:52.993 [2024-11-18 07:08:13.949424] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
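The blockdev reset case above deliberately disconnects and reconnects the controller, so the ERROR line about failing to flush tqpair 0x207b6a0 with errno 9 (Bad file descriptor) is expected noise from the qpair being torn down mid-reset rather than a test failure; the "Resetting controller successful" notice and the "passed" verdict that follows are the actual result. Outside bdevio, the same reset path can be poked by hand with rpc.py bdev_nvme_reset_controller against whatever application has the controller attached (an aside, not something this run does).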
00:22:52.993 passed 00:22:53.251 Test: blockdev write read 8 blocks ...passed 00:22:53.251 Test: blockdev write read size > 128k ...passed 00:22:53.251 Test: blockdev write read invalid size ...passed 00:22:53.251 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:53.251 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:53.251 Test: blockdev write read max offset ...passed 00:22:53.251 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:53.251 Test: blockdev writev readv 8 blocks ...passed 00:22:53.251 Test: blockdev writev readv 30 x 1block ...passed 00:22:53.251 Test: blockdev writev readv block ...passed 00:22:53.251 Test: blockdev writev readv size > 128k ...passed 00:22:53.251 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:53.251 Test: blockdev comparev and writev ...[2024-11-18 07:08:14.205097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.251 [2024-11-18 07:08:14.205133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:53.251 [2024-11-18 07:08:14.205166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.251 [2024-11-18 07:08:14.205184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:53.251 [2024-11-18 07:08:14.205506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.251 [2024-11-18 07:08:14.205531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:53.251 [2024-11-18 07:08:14.205553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.251 [2024-11-18 07:08:14.205570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:53.251 [2024-11-18 07:08:14.205901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.251 [2024-11-18 07:08:14.205926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:53.251 [2024-11-18 07:08:14.205947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.251 [2024-11-18 07:08:14.205963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:53.251 [2024-11-18 07:08:14.206256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.251 [2024-11-18 07:08:14.206280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:53.251 [2024-11-18 07:08:14.206302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.251 [2024-11-18 07:08:14.206319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:53.510 passed 00:22:53.510 Test: blockdev nvme passthru rw ...passed 00:22:53.510 Test: blockdev nvme passthru vendor specific ...[2024-11-18 07:08:14.290737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:53.510 [2024-11-18 07:08:14.290767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:53.510 [2024-11-18 07:08:14.290922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:53.510 [2024-11-18 07:08:14.290946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:53.510 [2024-11-18 07:08:14.291085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:53.510 [2024-11-18 07:08:14.291107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:53.510 [2024-11-18 07:08:14.291248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:53.510 [2024-11-18 07:08:14.291277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:53.510 passed 00:22:53.510 Test: blockdev nvme admin passthru ...passed 00:22:53.510 Test: blockdev copy ...passed 00:22:53.510 00:22:53.510 Run Summary: Type Total Ran Passed Failed Inactive 00:22:53.510 suites 1 1 n/a 0 0 00:22:53.510 tests 23 23 23 0 0 00:22:53.510 asserts 152 152 152 0 n/a 00:22:53.510 00:22:53.510 Elapsed time = 1.310 seconds 00:22:53.768 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:53.768 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.768 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:53.768 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.768 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:53.768 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:53.768 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:53.768 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:53.768 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:53.768 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:53.768 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:53.768 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:53.768 rmmod nvme_tcp 00:22:53.768 rmmod nvme_fabrics 00:22:53.768 rmmod nvme_keyring 00:22:53.768 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:53.768 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:53.768 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:53.768 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 269588 ']' 00:22:53.768 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 269588 00:22:53.768 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 269588 ']' 00:22:53.768 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 269588 00:22:53.768 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:53.768 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:53.768 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 269588 00:22:54.027 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:54.027 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:54.027 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 269588' 00:22:54.027 killing process with pid 269588 00:22:54.027 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 269588 00:22:54.027 07:08:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 269588 00:22:54.287 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:54.287 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:54.287 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:54.287 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:54.288 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:54.288 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:54.288 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:54.288 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:54.288 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:54.288 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.288 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.288 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:56.827 00:22:56.827 real 0m6.701s 00:22:56.827 user 0m11.569s 00:22:56.827 sys 0m2.621s 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:22:56.827 ************************************ 00:22:56.827 END TEST nvmf_bdevio_no_huge 00:22:56.827 ************************************ 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:56.827 ************************************ 00:22:56.827 START TEST nvmf_tls 00:22:56.827 ************************************ 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:56.827 * Looking for test storage... 00:22:56.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:56.827 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:56.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.828 --rc genhtml_branch_coverage=1 00:22:56.828 --rc genhtml_function_coverage=1 00:22:56.828 --rc genhtml_legend=1 00:22:56.828 --rc geninfo_all_blocks=1 00:22:56.828 --rc geninfo_unexecuted_blocks=1 00:22:56.828 00:22:56.828 ' 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:56.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.828 --rc genhtml_branch_coverage=1 00:22:56.828 --rc genhtml_function_coverage=1 00:22:56.828 --rc genhtml_legend=1 00:22:56.828 --rc geninfo_all_blocks=1 00:22:56.828 --rc geninfo_unexecuted_blocks=1 00:22:56.828 00:22:56.828 ' 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:56.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.828 --rc genhtml_branch_coverage=1 00:22:56.828 --rc genhtml_function_coverage=1 00:22:56.828 --rc genhtml_legend=1 00:22:56.828 --rc geninfo_all_blocks=1 00:22:56.828 --rc geninfo_unexecuted_blocks=1 00:22:56.828 00:22:56.828 ' 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:56.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.828 --rc genhtml_branch_coverage=1 00:22:56.828 --rc genhtml_function_coverage=1 00:22:56.828 --rc genhtml_legend=1 00:22:56.828 --rc geninfo_all_blocks=1 00:22:56.828 --rc geninfo_unexecuted_blocks=1 00:22:56.828 00:22:56.828 ' 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
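Before the TLS test gets going, nvmftestfini (traced a few records above, at the end of the bdevio run) unwinds everything that test set up. Reduced to its essentials, and with the namespace deletion marked as an assumption because _remove_spdk_ns runs with xtrace disabled, the teardown is roughly:

  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics   # prints the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines seen above
  kill $nvmfpid && wait $nvmfpid                           # killprocess, pid 269588 in this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the rules the ipts helper tagged earlier
  ip netns delete cvl_0_0_ns_spdk                          # assumed: this is what _remove_spdk_ns does under the hood
  ip -4 addr flush cvl_0_1                                 # leave the test port unconfigured for the next test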
00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:56.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:56.828 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:58.734 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:58.734 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:58.734 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:58.734 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:58.734 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:58.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:22:58.735 00:22:58.735 --- 10.0.0.2 ping statistics --- 00:22:58.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.735 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:58.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:58.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:22:58.735 00:22:58.735 --- 10.0.0.1 ping statistics --- 00:22:58.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.735 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=271758 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 271758 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 271758 ']' 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.735 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.735 [2024-11-18 07:08:19.690263] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
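
The namespace plumbing that nvmf_tcp_init traced above boils down to a handful of commands: the target-side ice port (cvl_0_0 in this run) is moved into a private network namespace, the peer port (cvl_0_1) stays in the default namespace as the initiator, both get addresses on 10.0.0.0/24, an iptables rule opens TCP port 4420, reachability is checked both ways with ping, and only then is nvmf_tgt launched inside the namespace with --wait-for-rpc so it sits idle until configured over RPC. A minimal sketch, using the interface names and addresses from this run (they differ per machine) and with the SPDK tree path shortened to ./:

# put the target port in its own namespace; the initiator port stays in the default one
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# let NVMe/TCP traffic in, then sanity-check the link in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# the target runs inside the namespace and waits to be configured over /var/tmp/spdk.sock
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
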
00:22:58.735 [2024-11-18 07:08:19.690339] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.993 [2024-11-18 07:08:19.762831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.993 [2024-11-18 07:08:19.808343] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.993 [2024-11-18 07:08:19.808393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.993 [2024-11-18 07:08:19.808416] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.993 [2024-11-18 07:08:19.808427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.993 [2024-11-18 07:08:19.808437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:58.993 [2024-11-18 07:08:19.809057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.993 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:58.993 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:58.993 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:58.993 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:58.993 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.993 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.993 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:58.993 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:59.250 true 00:22:59.250 07:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:59.251 07:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:59.815 07:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:59.815 07:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:59.815 07:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:00.073 07:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:00.073 07:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:00.331 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:00.331 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:00.331 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:00.589 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:00.589 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:00.846 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:00.846 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:00.846 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:00.846 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:01.103 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:01.103 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:01.103 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:01.360 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:01.360 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:01.617 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:01.617 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:01.617 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:01.875 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:01.875 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:02.134 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:02.134 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:02.134 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:02.134 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:02.134 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:02.134 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:02.134 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:02.134 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:02.134 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:02.134 07:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:02.134 07:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:02.134 07:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:02.134 07:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:23:02.134 07:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:02.134 07:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:23:02.134 07:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:02.134 07:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:02.134 07:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:02.134 07:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:02.134 07:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.SCq1rnewsh 00:23:02.134 07:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:02.134 07:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.6ThPg9Nyk4 00:23:02.134 07:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:02.134 07:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:02.134 07:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.SCq1rnewsh 00:23:02.134 07:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.6ThPg9Nyk4 00:23:02.134 07:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:02.392 07:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:02.959 07:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.SCq1rnewsh 00:23:02.959 07:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.SCq1rnewsh 00:23:02.959 07:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:03.218 [2024-11-18 07:08:23.963095] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.218 07:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:03.476 07:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:03.733 [2024-11-18 07:08:24.560705] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:03.733 [2024-11-18 07:08:24.560949] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.733 07:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:03.992 malloc0 00:23:03.992 07:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:04.250 07:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.SCq1rnewsh 00:23:04.508 07:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:04.766 07:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.SCq1rnewsh 00:23:16.961 Initializing NVMe Controllers 00:23:16.961 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:16.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:16.961 Initialization complete. Launching workers. 00:23:16.961 ======================================================== 00:23:16.961 Latency(us) 00:23:16.961 Device Information : IOPS MiB/s Average min max 00:23:16.961 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8700.27 33.99 7358.13 977.46 8462.60 00:23:16.961 ======================================================== 00:23:16.961 Total : 8700.27 33.99 7358.13 977.46 8462.60 00:23:16.961 00:23:16.961 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SCq1rnewsh 00:23:16.961 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:16.961 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:16.961 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:16.961 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.SCq1rnewsh 00:23:16.961 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:16.961 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=273715 00:23:16.961 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:16.961 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 273715 /var/tmp/bdevperf.sock 00:23:16.961 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:16.961 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 273715 ']' 00:23:16.961 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.961 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:16.961 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:16.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.961 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:16.961 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.961 [2024-11-18 07:08:35.871724] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:23:16.961 [2024-11-18 07:08:35.871826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid273715 ] 00:23:16.961 [2024-11-18 07:08:35.941913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.961 [2024-11-18 07:08:35.991323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.961 07:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:16.961 07:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:16.961 07:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SCq1rnewsh 00:23:16.961 07:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:16.961 [2024-11-18 07:08:36.651312] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.961 TLSTESTn1 00:23:16.961 07:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:16.961 Running I/O for 10 seconds... 
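
Before the per-second samples below, it helps to condense the RPC sequence target/tls.sh has driven to this point. On the target: the ssl socket implementation is selected and pinned to TLS 1.3, the framework is started, a TCP transport and a subsystem are created, the listener is added with -k so the port requires a secure channel, and the PSK file is registered in the keyring and bound to the host NQN. On the initiator: bdevperf is started idle on its own RPC socket, the same PSK file is added to its keyring, a controller is attached with --psk, and bdevperf.py runs the timed verify workload. A condensed sketch of those calls as they appear in this run (rpc is shorthand for the scripts/rpc.py path used above; the ktls and tls-version probing steps are omitted):

rpc=./scripts/rpc.py

# target side (default RPC socket of the nvmf_tgt started earlier)
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.SCq1rnewsh
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# initiator side: idle bdevperf on its own RPC socket, then a TLS-protected attach
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SCq1rnewsh
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The throughput table printed earlier (around 8700 IOPS) came from the same target, exercised from inside the namespace with spdk_nvme_perf -S ssl ... --psk-path /tmp/tmp.SCq1rnewsh rather than with bdevperf.
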
00:23:17.896 3065.00 IOPS, 11.97 MiB/s [2024-11-18T06:08:40.246Z] 3144.00 IOPS, 12.28 MiB/s [2024-11-18T06:08:41.180Z] 3164.00 IOPS, 12.36 MiB/s [2024-11-18T06:08:42.113Z] 3184.50 IOPS, 12.44 MiB/s [2024-11-18T06:08:43.047Z] 3174.60 IOPS, 12.40 MiB/s [2024-11-18T06:08:43.980Z] 3171.33 IOPS, 12.39 MiB/s [2024-11-18T06:08:44.913Z] 3183.14 IOPS, 12.43 MiB/s [2024-11-18T06:08:46.287Z] 3179.38 IOPS, 12.42 MiB/s [2024-11-18T06:08:47.221Z] 3196.44 IOPS, 12.49 MiB/s [2024-11-18T06:08:47.221Z] 3216.00 IOPS, 12.56 MiB/s 00:23:26.243 Latency(us) 00:23:26.243 [2024-11-18T06:08:47.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.243 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:26.243 Verification LBA range: start 0x0 length 0x2000 00:23:26.243 TLSTESTn1 : 10.02 3223.19 12.59 0.00 0.00 39647.65 6553.60 35729.26 00:23:26.243 [2024-11-18T06:08:47.221Z] =================================================================================================================== 00:23:26.243 [2024-11-18T06:08:47.221Z] Total : 3223.19 12.59 0.00 0.00 39647.65 6553.60 35729.26 00:23:26.243 { 00:23:26.243 "results": [ 00:23:26.243 { 00:23:26.243 "job": "TLSTESTn1", 00:23:26.243 "core_mask": "0x4", 00:23:26.243 "workload": "verify", 00:23:26.243 "status": "finished", 00:23:26.243 "verify_range": { 00:23:26.243 "start": 0, 00:23:26.243 "length": 8192 00:23:26.243 }, 00:23:26.243 "queue_depth": 128, 00:23:26.243 "io_size": 4096, 00:23:26.243 "runtime": 10.01678, 00:23:26.243 "iops": 3223.1914846886925, 00:23:26.243 "mibps": 12.590591737065205, 00:23:26.243 "io_failed": 0, 00:23:26.244 "io_timeout": 0, 00:23:26.244 "avg_latency_us": 39647.64634009466, 00:23:26.244 "min_latency_us": 6553.6, 00:23:26.244 "max_latency_us": 35729.2562962963 00:23:26.244 } 00:23:26.244 ], 00:23:26.244 "core_count": 1 00:23:26.244 } 00:23:26.244 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:26.244 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 273715 00:23:26.244 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 273715 ']' 00:23:26.244 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 273715 00:23:26.244 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:26.244 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:26.244 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 273715 00:23:26.244 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:26.244 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:26.244 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 273715' 00:23:26.244 killing process with pid 273715 00:23:26.244 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 273715 00:23:26.244 Received shutdown signal, test time was about 10.000000 seconds 00:23:26.244 00:23:26.244 Latency(us) 00:23:26.244 [2024-11-18T06:08:47.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.244 [2024-11-18T06:08:47.222Z] 
=================================================================================================================== 00:23:26.244 [2024-11-18T06:08:47.222Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:26.244 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 273715 00:23:26.244 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6ThPg9Nyk4 00:23:26.244 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:26.244 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6ThPg9Nyk4 00:23:26.244 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:26.244 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.244 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:26.244 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.244 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6ThPg9Nyk4 00:23:26.244 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:26.244 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:26.244 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:26.244 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6ThPg9Nyk4 00:23:26.244 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:26.244 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=274934 00:23:26.244 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:26.244 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:26.244 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 274934 /var/tmp/bdevperf.sock 00:23:26.244 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 274934 ']' 00:23:26.244 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.244 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.244 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:26.244 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.244 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.244 [2024-11-18 07:08:47.200043] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:23:26.244 [2024-11-18 07:08:47.200142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid274934 ] 00:23:26.502 [2024-11-18 07:08:47.271258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.502 [2024-11-18 07:08:47.316967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.502 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.502 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:26.502 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6ThPg9Nyk4 00:23:26.761 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:27.019 [2024-11-18 07:08:47.949387] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:27.019 [2024-11-18 07:08:47.960890] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:27.019 [2024-11-18 07:08:47.961606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154a370 (107): Transport endpoint is not connected 00:23:27.019 [2024-11-18 07:08:47.962596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154a370 (9): Bad file descriptor 00:23:27.019 [2024-11-18 07:08:47.963595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:27.019 [2024-11-18 07:08:47.963622] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:27.019 [2024-11-18 07:08:47.963637] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:27.019 [2024-11-18 07:08:47.963656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:27.020 request: 00:23:27.020 { 00:23:27.020 "name": "TLSTEST", 00:23:27.020 "trtype": "tcp", 00:23:27.020 "traddr": "10.0.0.2", 00:23:27.020 "adrfam": "ipv4", 00:23:27.020 "trsvcid": "4420", 00:23:27.020 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.020 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.020 "prchk_reftag": false, 00:23:27.020 "prchk_guard": false, 00:23:27.020 "hdgst": false, 00:23:27.020 "ddgst": false, 00:23:27.020 "psk": "key0", 00:23:27.020 "allow_unrecognized_csi": false, 00:23:27.020 "method": "bdev_nvme_attach_controller", 00:23:27.020 "req_id": 1 00:23:27.020 } 00:23:27.020 Got JSON-RPC error response 00:23:27.020 response: 00:23:27.020 { 00:23:27.020 "code": -5, 00:23:27.020 "message": "Input/output error" 00:23:27.020 } 00:23:27.020 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 274934 00:23:27.020 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 274934 ']' 00:23:27.020 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 274934 00:23:27.020 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:27.020 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.020 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 274934 00:23:27.278 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 274934' 00:23:27.279 killing process with pid 274934 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 274934 00:23:27.279 Received shutdown signal, test time was about 10.000000 seconds 00:23:27.279 00:23:27.279 Latency(us) 00:23:27.279 [2024-11-18T06:08:48.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.279 [2024-11-18T06:08:48.257Z] =================================================================================================================== 00:23:27.279 [2024-11-18T06:08:48.257Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 274934 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.SCq1rnewsh 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.SCq1rnewsh 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.SCq1rnewsh 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.SCq1rnewsh 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275063 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275063 /var/tmp/bdevperf.sock 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275063 ']' 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.279 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.538 [2024-11-18 07:08:48.270193] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:23:27.538 [2024-11-18 07:08:48.270290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275063 ] 00:23:27.538 [2024-11-18 07:08:48.339534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.538 [2024-11-18 07:08:48.386003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.538 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.538 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:27.538 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SCq1rnewsh 00:23:28.104 07:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:28.104 [2024-11-18 07:08:49.025744] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:28.104 [2024-11-18 07:08:49.033516] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:28.104 [2024-11-18 07:08:49.033549] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:28.104 [2024-11-18 07:08:49.033604] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:28.104 [2024-11-18 07:08:49.033834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119e370 (107): Transport endpoint is not connected 00:23:28.104 [2024-11-18 07:08:49.034823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119e370 (9): Bad file descriptor 00:23:28.105 [2024-11-18 07:08:49.035836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:28.105 [2024-11-18 07:08:49.035856] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:28.105 [2024-11-18 07:08:49.035869] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:28.105 [2024-11-18 07:08:49.035887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
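
This second negative case fails for a different reason than the first. The first NOT case pointed host1 at the other key (/tmp/tmp.6ThPg9Nyk4), which is not what the target has registered for host1, so the TLS setup collapses and the attach returns an I/O error. Here the key file is the right one, but nqn.2016-06.io.spdk:host2 was never added to the subsystem, so the target cannot find any PSK for the TLS identity logged above (NVMe0R01 host2 / cnode1) and rejects the connection. Making host2 usable would take an add_host registration analogous to the one done for host1, e.g. (illustrative only; the test deliberately leaves host2 unregistered):

./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host2 --psk key0
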
00:23:28.105 request: 00:23:28.105 { 00:23:28.105 "name": "TLSTEST", 00:23:28.105 "trtype": "tcp", 00:23:28.105 "traddr": "10.0.0.2", 00:23:28.105 "adrfam": "ipv4", 00:23:28.105 "trsvcid": "4420", 00:23:28.105 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.105 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:28.105 "prchk_reftag": false, 00:23:28.105 "prchk_guard": false, 00:23:28.105 "hdgst": false, 00:23:28.105 "ddgst": false, 00:23:28.105 "psk": "key0", 00:23:28.105 "allow_unrecognized_csi": false, 00:23:28.105 "method": "bdev_nvme_attach_controller", 00:23:28.105 "req_id": 1 00:23:28.105 } 00:23:28.105 Got JSON-RPC error response 00:23:28.105 response: 00:23:28.105 { 00:23:28.105 "code": -5, 00:23:28.105 "message": "Input/output error" 00:23:28.105 } 00:23:28.105 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 275063 00:23:28.105 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275063 ']' 00:23:28.105 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275063 00:23:28.105 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:28.105 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.105 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275063 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275063' 00:23:28.362 killing process with pid 275063 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275063 00:23:28.362 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.362 00:23:28.362 Latency(us) 00:23:28.362 [2024-11-18T06:08:49.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.362 [2024-11-18T06:08:49.340Z] =================================================================================================================== 00:23:28.362 [2024-11-18T06:08:49.340Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275063 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.SCq1rnewsh 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.SCq1rnewsh 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.SCq1rnewsh 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.SCq1rnewsh 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275204 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275204 /var/tmp/bdevperf.sock 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275204 ']' 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.362 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.620 [2024-11-18 07:08:49.341680] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:23:28.620 [2024-11-18 07:08:49.341797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275204 ] 00:23:28.620 [2024-11-18 07:08:49.410631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.620 [2024-11-18 07:08:49.458236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.620 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.620 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:28.620 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SCq1rnewsh 00:23:29.186 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:29.186 [2024-11-18 07:08:50.117016] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:29.186 [2024-11-18 07:08:50.126403] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:29.186 [2024-11-18 07:08:50.126432] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:29.186 [2024-11-18 07:08:50.126484] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:29.186 [2024-11-18 07:08:50.127456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bea370 (107): Transport endpoint is not connected 00:23:29.186 [2024-11-18 07:08:50.128448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bea370 (9): Bad file descriptor 00:23:29.186 [2024-11-18 07:08:50.129448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:29.186 [2024-11-18 07:08:50.129483] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:29.186 [2024-11-18 07:08:50.129505] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:29.186 [2024-11-18 07:08:50.129525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:23:29.186 request: 00:23:29.186 { 00:23:29.186 "name": "TLSTEST", 00:23:29.186 "trtype": "tcp", 00:23:29.186 "traddr": "10.0.0.2", 00:23:29.186 "adrfam": "ipv4", 00:23:29.186 "trsvcid": "4420", 00:23:29.186 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:29.186 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.186 "prchk_reftag": false, 00:23:29.186 "prchk_guard": false, 00:23:29.186 "hdgst": false, 00:23:29.186 "ddgst": false, 00:23:29.186 "psk": "key0", 00:23:29.186 "allow_unrecognized_csi": false, 00:23:29.186 "method": "bdev_nvme_attach_controller", 00:23:29.186 "req_id": 1 00:23:29.186 } 00:23:29.186 Got JSON-RPC error response 00:23:29.186 response: 00:23:29.186 { 00:23:29.186 "code": -5, 00:23:29.186 "message": "Input/output error" 00:23:29.186 } 00:23:29.186 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 275204 00:23:29.186 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275204 ']' 00:23:29.186 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275204 00:23:29.186 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:29.186 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:29.186 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275204 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275204' 00:23:29.445 killing process with pid 275204 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275204 00:23:29.445 Received shutdown signal, test time was about 10.000000 seconds 00:23:29.445 00:23:29.445 Latency(us) 00:23:29.445 [2024-11-18T06:08:50.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.445 [2024-11-18T06:08:50.423Z] =================================================================================================================== 00:23:29.445 [2024-11-18T06:08:50.423Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275204 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:29.445 07:08:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275344 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275344 /var/tmp/bdevperf.sock 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275344 ']' 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.445 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.704 [2024-11-18 07:08:50.437595] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:23:29.704 [2024-11-18 07:08:50.437678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275344 ] 00:23:29.704 [2024-11-18 07:08:50.504809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.704 [2024-11-18 07:08:50.550274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.962 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.962 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:29.962 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:29.962 [2024-11-18 07:08:50.937567] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:29.962 [2024-11-18 07:08:50.937615] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:30.219 request: 00:23:30.219 { 00:23:30.219 "name": "key0", 00:23:30.219 "path": "", 00:23:30.219 "method": "keyring_file_add_key", 00:23:30.219 "req_id": 1 00:23:30.219 } 00:23:30.219 Got JSON-RPC error response 00:23:30.219 response: 00:23:30.219 { 00:23:30.219 "code": -1, 00:23:30.219 "message": "Operation not permitted" 00:23:30.219 } 00:23:30.219 07:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:30.478 [2024-11-18 07:08:51.210387] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:30.478 [2024-11-18 07:08:51.210442] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:30.478 request: 00:23:30.478 { 00:23:30.478 "name": "TLSTEST", 00:23:30.478 "trtype": "tcp", 00:23:30.478 "traddr": "10.0.0.2", 00:23:30.478 "adrfam": "ipv4", 00:23:30.478 "trsvcid": "4420", 00:23:30.478 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:30.478 "prchk_reftag": false, 00:23:30.478 "prchk_guard": false, 00:23:30.478 "hdgst": false, 00:23:30.478 "ddgst": false, 00:23:30.478 "psk": "key0", 00:23:30.478 "allow_unrecognized_csi": false, 00:23:30.478 "method": "bdev_nvme_attach_controller", 00:23:30.478 "req_id": 1 00:23:30.478 } 00:23:30.478 Got JSON-RPC error response 00:23:30.478 response: 00:23:30.478 { 00:23:30.478 "code": -126, 00:23:30.478 "message": "Required key not available" 00:23:30.478 } 00:23:30.478 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 275344 00:23:30.478 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275344 ']' 00:23:30.478 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275344 00:23:30.478 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:30.478 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.478 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275344 
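
The last NOT case above never even reaches the target: keyring_file_add_key is handed an empty path, the file-based keyring only accepts absolute paths, so no key is created (code -1, "Operation not permitted") and the subsequent bdev_nvme_attach_controller --psk key0 fails with -126, "Required key not available". The working pattern, as used throughout this test, is to register an absolute path to a 0600-mode key file before attaching, e.g. with the paths from this run:

chmod 0600 /tmp/tmp.SCq1rnewsh
./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SCq1rnewsh
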
00:23:30.478 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:30.478 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:30.478 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275344' 00:23:30.478 killing process with pid 275344 00:23:30.478 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275344 00:23:30.478 Received shutdown signal, test time was about 10.000000 seconds 00:23:30.478 00:23:30.478 Latency(us) 00:23:30.478 [2024-11-18T06:08:51.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.478 [2024-11-18T06:08:51.456Z] =================================================================================================================== 00:23:30.478 [2024-11-18T06:08:51.456Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:30.478 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275344 00:23:30.478 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:30.478 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:30.478 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:30.478 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:30.478 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:30.478 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 271758 00:23:30.478 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 271758 ']' 00:23:30.478 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 271758 00:23:30.478 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:30.478 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.478 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 271758 00:23:30.736 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 271758' 00:23:30.737 killing process with pid 271758 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 271758 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 271758 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.z1oj8GbH3J 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.z1oj8GbH3J 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=275568 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 275568 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275568 ']' 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:30.737 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.995 [2024-11-18 07:08:51.750216] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:23:30.995 [2024-11-18 07:08:51.750307] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.995 [2024-11-18 07:08:51.820718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.995 [2024-11-18 07:08:51.869028] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.995 [2024-11-18 07:08:51.869105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
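The key_long value above comes from format_interchange_psk, which wraps the raw hex key in the NVMe TLS PSK interchange format: the literal prefix NVMeTLSkey-1, a digest identifier (02 here), and a base64 payload terminated by a colon. A rough stand-in for that transformation, assuming the payload is the key characters followed by a little-endian CRC-32 of those characters (the actual packing is done by the 'python -' helper in nvmf/common.sh traced above):

  # Hypothetical stand-in for: format_interchange_psk <hex-key> <digest-id>
  python3 -c 'import base64,zlib,sys; k=sys.argv[1]; d=int(sys.argv[2]); crc=zlib.crc32(k.encode()); print("NVMeTLSkey-1:%02d:%s:" % (d, base64.b64encode(k.encode()+crc.to_bytes(4,"little")).decode()))' \
      00112233445566778899aabbccddeeff0011223344556677 2
  # If the CRC/endianness assumption holds, this prints the key_long logged above
  # (NVMeTLSkey-1:02:MDAx...wWXNJw==:), which is then written to the 0600 temp key file.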
00:23:30.995 [2024-11-18 07:08:51.869119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.995 [2024-11-18 07:08:51.869130] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.995 [2024-11-18 07:08:51.869139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.995 [2024-11-18 07:08:51.869742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.253 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:31.253 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:31.253 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:31.253 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:31.253 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.253 07:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.253 07:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.z1oj8GbH3J 00:23:31.253 07:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z1oj8GbH3J 00:23:31.253 07:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:31.516 [2024-11-18 07:08:52.257938] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.516 07:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:31.773 07:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:32.031 [2024-11-18 07:08:52.783335] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:32.031 [2024-11-18 07:08:52.783593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.031 07:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:32.289 malloc0 00:23:32.289 07:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:32.547 07:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.z1oj8GbH3J 00:23:32.804 07:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:33.062 07:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z1oj8GbH3J 00:23:33.062 07:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:33.062 07:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:33.062 07:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:33.062 07:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.z1oj8GbH3J 00:23:33.062 07:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:33.062 07:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275782 00:23:33.062 07:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:33.062 07:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:33.062 07:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275782 /var/tmp/bdevperf.sock 00:23:33.062 07:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275782 ']' 00:23:33.062 07:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.062 07:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.062 07:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:33.062 07:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.062 07:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.062 [2024-11-18 07:08:53.915329] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
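For reference, the setup_nvmf_tgt helper traced above reduces to the following target-side RPC sequence (commands condensed from the trace; $RPC and $KEY are shorthands for the full paths used in this run):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  KEY=/tmp/tmp.z1oj8GbH3J                                   # 0600 interchange-format PSK written earlier
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k => TLS listener
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC keyring_file_add_key key0 "$KEY"                     # register the PSK with the keyring
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0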
00:23:33.062 [2024-11-18 07:08:53.915420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275782 ] 00:23:33.062 [2024-11-18 07:08:53.984362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.062 [2024-11-18 07:08:54.030145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.320 07:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.320 07:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:33.320 07:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z1oj8GbH3J 00:23:33.578 07:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:33.836 [2024-11-18 07:08:54.677848] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:33.836 TLSTESTn1 00:23:33.836 07:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:34.094 Running I/O for 10 seconds... 00:23:35.961 3138.00 IOPS, 12.26 MiB/s [2024-11-18T06:08:58.312Z] 3268.00 IOPS, 12.77 MiB/s [2024-11-18T06:08:59.246Z] 3302.00 IOPS, 12.90 MiB/s [2024-11-18T06:09:00.179Z] 3330.25 IOPS, 13.01 MiB/s [2024-11-18T06:09:01.114Z] 3341.80 IOPS, 13.05 MiB/s [2024-11-18T06:09:02.049Z] 3347.50 IOPS, 13.08 MiB/s [2024-11-18T06:09:02.983Z] 3336.29 IOPS, 13.03 MiB/s [2024-11-18T06:09:03.916Z] 3345.25 IOPS, 13.07 MiB/s [2024-11-18T06:09:05.291Z] 3330.33 IOPS, 13.01 MiB/s [2024-11-18T06:09:05.291Z] 3323.90 IOPS, 12.98 MiB/s 00:23:44.313 Latency(us) 00:23:44.313 [2024-11-18T06:09:05.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.313 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:44.313 Verification LBA range: start 0x0 length 0x2000 00:23:44.313 TLSTESTn1 : 10.02 3329.98 13.01 0.00 0.00 38376.42 6359.42 50875.35 00:23:44.313 [2024-11-18T06:09:05.291Z] =================================================================================================================== 00:23:44.313 [2024-11-18T06:09:05.291Z] Total : 3329.98 13.01 0.00 0.00 38376.42 6359.42 50875.35 00:23:44.313 { 00:23:44.313 "results": [ 00:23:44.313 { 00:23:44.313 "job": "TLSTESTn1", 00:23:44.313 "core_mask": "0x4", 00:23:44.313 "workload": "verify", 00:23:44.313 "status": "finished", 00:23:44.313 "verify_range": { 00:23:44.313 "start": 0, 00:23:44.313 "length": 8192 00:23:44.313 }, 00:23:44.313 "queue_depth": 128, 00:23:44.313 "io_size": 4096, 00:23:44.313 "runtime": 10.019282, 00:23:44.313 "iops": 3329.979134233371, 00:23:44.313 "mibps": 13.007730993099106, 00:23:44.313 "io_failed": 0, 00:23:44.313 "io_timeout": 0, 00:23:44.313 "avg_latency_us": 38376.41590323569, 00:23:44.313 "min_latency_us": 6359.419259259259, 00:23:44.313 "max_latency_us": 50875.35407407407 00:23:44.313 } 00:23:44.313 ], 00:23:44.313 
"core_count": 1 00:23:44.313 } 00:23:44.313 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:44.313 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 275782 00:23:44.313 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275782 ']' 00:23:44.313 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275782 00:23:44.313 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:44.313 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.313 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275782 00:23:44.313 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:44.313 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:44.313 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275782' 00:23:44.313 killing process with pid 275782 00:23:44.313 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275782 00:23:44.313 Received shutdown signal, test time was about 10.000000 seconds 00:23:44.313 00:23:44.313 Latency(us) 00:23:44.313 [2024-11-18T06:09:05.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.313 [2024-11-18T06:09:05.291Z] =================================================================================================================== 00:23:44.313 [2024-11-18T06:09:05.291Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:44.313 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275782 00:23:44.313 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.z1oj8GbH3J 00:23:44.313 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z1oj8GbH3J 00:23:44.313 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:44.313 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z1oj8GbH3J 00:23:44.313 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:44.313 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.313 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:44.313 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.313 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z1oj8GbH3J 00:23:44.313 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:44.313 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:44.313 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:44.313 
07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.z1oj8GbH3J 00:23:44.313 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:44.313 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=277208 00:23:44.313 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:44.313 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:44.313 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 277208 /var/tmp/bdevperf.sock 00:23:44.313 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 277208 ']' 00:23:44.313 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:44.313 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.313 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:44.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:44.313 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.313 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.313 [2024-11-18 07:09:05.224741] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:23:44.313 [2024-11-18 07:09:05.224841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid277208 ] 00:23:44.572 [2024-11-18 07:09:05.291901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.572 [2024-11-18 07:09:05.343508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.572 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.572 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:44.572 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z1oj8GbH3J 00:23:44.829 [2024-11-18 07:09:05.723023] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.z1oj8GbH3J': 0100666 00:23:44.829 [2024-11-18 07:09:05.723067] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:44.829 request: 00:23:44.829 { 00:23:44.829 "name": "key0", 00:23:44.829 "path": "/tmp/tmp.z1oj8GbH3J", 00:23:44.829 "method": "keyring_file_add_key", 00:23:44.829 "req_id": 1 00:23:44.829 } 00:23:44.829 Got JSON-RPC error response 00:23:44.829 response: 00:23:44.829 { 00:23:44.829 "code": -1, 00:23:44.829 "message": "Operation not permitted" 00:23:44.829 } 00:23:44.829 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:45.087 [2024-11-18 07:09:05.995873] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:45.087 [2024-11-18 07:09:05.995926] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:45.087 request: 00:23:45.087 { 00:23:45.087 "name": "TLSTEST", 00:23:45.087 "trtype": "tcp", 00:23:45.087 "traddr": "10.0.0.2", 00:23:45.087 "adrfam": "ipv4", 00:23:45.087 "trsvcid": "4420", 00:23:45.087 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:45.087 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:45.087 "prchk_reftag": false, 00:23:45.087 "prchk_guard": false, 00:23:45.087 "hdgst": false, 00:23:45.087 "ddgst": false, 00:23:45.087 "psk": "key0", 00:23:45.087 "allow_unrecognized_csi": false, 00:23:45.087 "method": "bdev_nvme_attach_controller", 00:23:45.087 "req_id": 1 00:23:45.087 } 00:23:45.087 Got JSON-RPC error response 00:23:45.087 response: 00:23:45.087 { 00:23:45.087 "code": -126, 00:23:45.087 "message": "Required key not available" 00:23:45.087 } 00:23:45.087 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 277208 00:23:45.087 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 277208 ']' 00:23:45.087 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 277208 00:23:45.087 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:45.087 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.087 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 277208 00:23:45.087 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:45.087 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:45.087 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 277208' 00:23:45.087 killing process with pid 277208 00:23:45.087 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 277208 00:23:45.087 Received shutdown signal, test time was about 10.000000 seconds 00:23:45.087 00:23:45.088 Latency(us) 00:23:45.088 [2024-11-18T06:09:06.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.088 [2024-11-18T06:09:06.066Z] =================================================================================================================== 00:23:45.088 [2024-11-18T06:09:06.066Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:45.088 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 277208 00:23:45.345 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:45.345 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:45.346 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:45.346 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:45.346 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:45.346 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 275568 00:23:45.346 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275568 ']' 00:23:45.346 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275568 00:23:45.346 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:45.346 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.346 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275568 00:23:45.346 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:45.346 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:45.346 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275568' 00:23:45.346 killing process with pid 275568 00:23:45.346 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275568 00:23:45.346 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275568 00:23:45.605 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:45.605 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:45.605 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:45.605 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.605 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=277425 
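The "Invalid permissions for key file ... 0100666" error above is the intended result of the chmod 0666 step: keyring_file_check_path appears to reject any key file whose group/other permission bits are set, so the PSK file has to stay owner-only (0600), as it was for the successful run. A quick illustration (rpc.py stands for the full scripts/rpc.py path used throughout this run):

  chmod 0666 /tmp/tmp.z1oj8GbH3J
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z1oj8GbH3J   # rejected, mode 0100666
  chmod 0600 /tmp/tmp.z1oj8GbH3J
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z1oj8GbH3J   # accepted once owner-only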
00:23:45.605 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:45.605 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 277425 00:23:45.605 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 277425 ']' 00:23:45.605 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.605 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:45.605 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.605 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:45.605 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.605 [2024-11-18 07:09:06.560715] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:23:45.605 [2024-11-18 07:09:06.560803] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.864 [2024-11-18 07:09:06.635337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.864 [2024-11-18 07:09:06.683523] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:45.864 [2024-11-18 07:09:06.683579] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:45.864 [2024-11-18 07:09:06.683594] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:45.864 [2024-11-18 07:09:06.683606] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:45.864 [2024-11-18 07:09:06.683616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:45.864 [2024-11-18 07:09:06.684206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.864 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:45.864 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:45.864 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:45.864 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:45.864 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.864 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.864 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.z1oj8GbH3J 00:23:45.864 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:45.864 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.z1oj8GbH3J 00:23:45.864 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:45.864 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:45.864 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:45.864 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:45.864 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.z1oj8GbH3J 00:23:45.864 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z1oj8GbH3J 00:23:45.864 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:46.123 [2024-11-18 07:09:07.095632] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.381 07:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:46.639 07:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:46.898 [2024-11-18 07:09:07.669208] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:46.898 [2024-11-18 07:09:07.669425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.898 07:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:47.157 malloc0 00:23:47.157 07:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:47.415 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.z1oj8GbH3J 00:23:47.673 [2024-11-18 
07:09:08.499179] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.z1oj8GbH3J': 0100666 00:23:47.673 [2024-11-18 07:09:08.499225] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:47.673 request: 00:23:47.673 { 00:23:47.673 "name": "key0", 00:23:47.673 "path": "/tmp/tmp.z1oj8GbH3J", 00:23:47.673 "method": "keyring_file_add_key", 00:23:47.673 "req_id": 1 00:23:47.673 } 00:23:47.673 Got JSON-RPC error response 00:23:47.673 response: 00:23:47.673 { 00:23:47.673 "code": -1, 00:23:47.673 "message": "Operation not permitted" 00:23:47.673 } 00:23:47.673 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:47.931 [2024-11-18 07:09:08.780028] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:47.931 [2024-11-18 07:09:08.780102] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:47.931 request: 00:23:47.931 { 00:23:47.931 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.931 "host": "nqn.2016-06.io.spdk:host1", 00:23:47.931 "psk": "key0", 00:23:47.931 "method": "nvmf_subsystem_add_host", 00:23:47.931 "req_id": 1 00:23:47.931 } 00:23:47.931 Got JSON-RPC error response 00:23:47.931 response: 00:23:47.931 { 00:23:47.931 "code": -32603, 00:23:47.931 "message": "Internal error" 00:23:47.931 } 00:23:47.931 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:47.931 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:47.931 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:47.931 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:47.931 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 277425 00:23:47.931 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 277425 ']' 00:23:47.931 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 277425 00:23:47.931 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:47.931 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:47.931 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 277425 00:23:47.931 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:47.931 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:47.931 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 277425' 00:23:47.931 killing process with pid 277425 00:23:47.931 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 277425 00:23:47.931 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 277425 00:23:48.189 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.z1oj8GbH3J 00:23:48.189 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:48.189 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:48.189 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:48.189 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.189 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=278275 00:23:48.189 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:48.189 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 278275 00:23:48.189 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 278275 ']' 00:23:48.189 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.189 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:48.189 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.189 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:48.189 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.189 [2024-11-18 07:09:09.085729] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:23:48.189 [2024-11-18 07:09:09.085823] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.189 [2024-11-18 07:09:09.158506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.447 [2024-11-18 07:09:09.206165] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:48.447 [2024-11-18 07:09:09.206214] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:48.447 [2024-11-18 07:09:09.206228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.447 [2024-11-18 07:09:09.206239] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.447 [2024-11-18 07:09:09.206249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
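In the 0666 run just above, the target-side failure cascades: because keyring_file_add_key was rejected, key0 never exists in the keyring, so the later nvmf_subsystem_add_host --psk key0 fails with -32603 ("Key 'key0' does not exist" / "Unable to add host to TCP transport"). The ordering that works, condensed from the earlier successful setup (rpc.py again stands for the full scripts/rpc.py path):

  chmod 0600 /tmp/tmp.z1oj8GbH3J                        # key file must be owner-only
  rpc.py keyring_file_add_key key0 /tmp/tmp.z1oj8GbH3J  # register the PSK first
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0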
00:23:48.447 [2024-11-18 07:09:09.206852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.447 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:48.447 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:48.447 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:48.447 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:48.447 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.447 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.447 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.z1oj8GbH3J 00:23:48.447 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z1oj8GbH3J 00:23:48.447 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:48.705 [2024-11-18 07:09:09.591347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.705 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:48.963 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:49.221 [2024-11-18 07:09:10.152985] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:49.221 [2024-11-18 07:09:10.153289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:49.221 07:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:49.480 malloc0 00:23:49.480 07:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:50.046 07:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.z1oj8GbH3J 00:23:50.046 07:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:50.305 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=278565 00:23:50.305 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:50.305 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:50.305 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 278565 /var/tmp/bdevperf.sock 00:23:50.305 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 278565 ']' 00:23:50.305 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.305 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:50.305 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:50.305 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.305 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.563 [2024-11-18 07:09:11.317505] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:23:50.563 [2024-11-18 07:09:11.317596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278565 ] 00:23:50.563 [2024-11-18 07:09:11.383017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.563 [2024-11-18 07:09:11.429876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.821 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.821 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:50.821 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z1oj8GbH3J 00:23:51.079 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.079 [2024-11-18 07:09:12.056222] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.338 TLSTESTn1 00:23:51.338 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:51.596 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:51.596 "subsystems": [ 00:23:51.596 { 00:23:51.596 "subsystem": "keyring", 00:23:51.596 "config": [ 00:23:51.596 { 00:23:51.596 "method": "keyring_file_add_key", 00:23:51.596 "params": { 00:23:51.596 "name": "key0", 00:23:51.596 "path": "/tmp/tmp.z1oj8GbH3J" 00:23:51.596 } 00:23:51.596 } 00:23:51.596 ] 00:23:51.597 }, 00:23:51.597 { 00:23:51.597 "subsystem": "iobuf", 00:23:51.597 "config": [ 00:23:51.597 { 00:23:51.597 "method": "iobuf_set_options", 00:23:51.597 "params": { 00:23:51.597 "small_pool_count": 8192, 00:23:51.597 "large_pool_count": 1024, 00:23:51.597 "small_bufsize": 8192, 00:23:51.597 "large_bufsize": 135168, 00:23:51.597 "enable_numa": false 00:23:51.597 } 00:23:51.597 } 00:23:51.597 ] 00:23:51.597 }, 00:23:51.597 { 00:23:51.597 "subsystem": "sock", 00:23:51.597 "config": [ 00:23:51.597 { 00:23:51.597 "method": "sock_set_default_impl", 00:23:51.597 "params": { 00:23:51.597 "impl_name": "posix" 
00:23:51.597 } 00:23:51.597 }, 00:23:51.597 { 00:23:51.597 "method": "sock_impl_set_options", 00:23:51.597 "params": { 00:23:51.597 "impl_name": "ssl", 00:23:51.597 "recv_buf_size": 4096, 00:23:51.597 "send_buf_size": 4096, 00:23:51.597 "enable_recv_pipe": true, 00:23:51.597 "enable_quickack": false, 00:23:51.597 "enable_placement_id": 0, 00:23:51.597 "enable_zerocopy_send_server": true, 00:23:51.597 "enable_zerocopy_send_client": false, 00:23:51.597 "zerocopy_threshold": 0, 00:23:51.597 "tls_version": 0, 00:23:51.597 "enable_ktls": false 00:23:51.597 } 00:23:51.597 }, 00:23:51.597 { 00:23:51.597 "method": "sock_impl_set_options", 00:23:51.597 "params": { 00:23:51.597 "impl_name": "posix", 00:23:51.597 "recv_buf_size": 2097152, 00:23:51.597 "send_buf_size": 2097152, 00:23:51.597 "enable_recv_pipe": true, 00:23:51.597 "enable_quickack": false, 00:23:51.597 "enable_placement_id": 0, 00:23:51.597 "enable_zerocopy_send_server": true, 00:23:51.597 "enable_zerocopy_send_client": false, 00:23:51.597 "zerocopy_threshold": 0, 00:23:51.597 "tls_version": 0, 00:23:51.597 "enable_ktls": false 00:23:51.597 } 00:23:51.597 } 00:23:51.597 ] 00:23:51.597 }, 00:23:51.597 { 00:23:51.597 "subsystem": "vmd", 00:23:51.597 "config": [] 00:23:51.597 }, 00:23:51.597 { 00:23:51.597 "subsystem": "accel", 00:23:51.597 "config": [ 00:23:51.597 { 00:23:51.597 "method": "accel_set_options", 00:23:51.597 "params": { 00:23:51.597 "small_cache_size": 128, 00:23:51.597 "large_cache_size": 16, 00:23:51.597 "task_count": 2048, 00:23:51.597 "sequence_count": 2048, 00:23:51.597 "buf_count": 2048 00:23:51.597 } 00:23:51.597 } 00:23:51.597 ] 00:23:51.597 }, 00:23:51.597 { 00:23:51.597 "subsystem": "bdev", 00:23:51.597 "config": [ 00:23:51.597 { 00:23:51.597 "method": "bdev_set_options", 00:23:51.597 "params": { 00:23:51.597 "bdev_io_pool_size": 65535, 00:23:51.597 "bdev_io_cache_size": 256, 00:23:51.597 "bdev_auto_examine": true, 00:23:51.597 "iobuf_small_cache_size": 128, 00:23:51.597 "iobuf_large_cache_size": 16 00:23:51.597 } 00:23:51.597 }, 00:23:51.597 { 00:23:51.597 "method": "bdev_raid_set_options", 00:23:51.597 "params": { 00:23:51.597 "process_window_size_kb": 1024, 00:23:51.597 "process_max_bandwidth_mb_sec": 0 00:23:51.597 } 00:23:51.597 }, 00:23:51.597 { 00:23:51.597 "method": "bdev_iscsi_set_options", 00:23:51.597 "params": { 00:23:51.597 "timeout_sec": 30 00:23:51.597 } 00:23:51.597 }, 00:23:51.597 { 00:23:51.597 "method": "bdev_nvme_set_options", 00:23:51.597 "params": { 00:23:51.597 "action_on_timeout": "none", 00:23:51.597 "timeout_us": 0, 00:23:51.597 "timeout_admin_us": 0, 00:23:51.597 "keep_alive_timeout_ms": 10000, 00:23:51.597 "arbitration_burst": 0, 00:23:51.597 "low_priority_weight": 0, 00:23:51.597 "medium_priority_weight": 0, 00:23:51.597 "high_priority_weight": 0, 00:23:51.597 "nvme_adminq_poll_period_us": 10000, 00:23:51.597 "nvme_ioq_poll_period_us": 0, 00:23:51.597 "io_queue_requests": 0, 00:23:51.597 "delay_cmd_submit": true, 00:23:51.597 "transport_retry_count": 4, 00:23:51.597 "bdev_retry_count": 3, 00:23:51.597 "transport_ack_timeout": 0, 00:23:51.597 "ctrlr_loss_timeout_sec": 0, 00:23:51.597 "reconnect_delay_sec": 0, 00:23:51.597 "fast_io_fail_timeout_sec": 0, 00:23:51.597 "disable_auto_failback": false, 00:23:51.597 "generate_uuids": false, 00:23:51.597 "transport_tos": 0, 00:23:51.597 "nvme_error_stat": false, 00:23:51.597 "rdma_srq_size": 0, 00:23:51.597 "io_path_stat": false, 00:23:51.597 "allow_accel_sequence": false, 00:23:51.597 "rdma_max_cq_size": 0, 00:23:51.597 
"rdma_cm_event_timeout_ms": 0, 00:23:51.597 "dhchap_digests": [ 00:23:51.597 "sha256", 00:23:51.597 "sha384", 00:23:51.597 "sha512" 00:23:51.597 ], 00:23:51.597 "dhchap_dhgroups": [ 00:23:51.597 "null", 00:23:51.597 "ffdhe2048", 00:23:51.597 "ffdhe3072", 00:23:51.597 "ffdhe4096", 00:23:51.597 "ffdhe6144", 00:23:51.597 "ffdhe8192" 00:23:51.597 ] 00:23:51.597 } 00:23:51.597 }, 00:23:51.597 { 00:23:51.597 "method": "bdev_nvme_set_hotplug", 00:23:51.597 "params": { 00:23:51.597 "period_us": 100000, 00:23:51.597 "enable": false 00:23:51.597 } 00:23:51.597 }, 00:23:51.597 { 00:23:51.597 "method": "bdev_malloc_create", 00:23:51.597 "params": { 00:23:51.597 "name": "malloc0", 00:23:51.597 "num_blocks": 8192, 00:23:51.597 "block_size": 4096, 00:23:51.597 "physical_block_size": 4096, 00:23:51.597 "uuid": "d0f553b4-0dff-4161-ab26-81b0a6b5777e", 00:23:51.597 "optimal_io_boundary": 0, 00:23:51.597 "md_size": 0, 00:23:51.597 "dif_type": 0, 00:23:51.597 "dif_is_head_of_md": false, 00:23:51.597 "dif_pi_format": 0 00:23:51.597 } 00:23:51.597 }, 00:23:51.597 { 00:23:51.597 "method": "bdev_wait_for_examine" 00:23:51.597 } 00:23:51.597 ] 00:23:51.597 }, 00:23:51.597 { 00:23:51.597 "subsystem": "nbd", 00:23:51.597 "config": [] 00:23:51.597 }, 00:23:51.597 { 00:23:51.597 "subsystem": "scheduler", 00:23:51.597 "config": [ 00:23:51.597 { 00:23:51.597 "method": "framework_set_scheduler", 00:23:51.597 "params": { 00:23:51.597 "name": "static" 00:23:51.597 } 00:23:51.597 } 00:23:51.597 ] 00:23:51.597 }, 00:23:51.597 { 00:23:51.597 "subsystem": "nvmf", 00:23:51.597 "config": [ 00:23:51.597 { 00:23:51.597 "method": "nvmf_set_config", 00:23:51.597 "params": { 00:23:51.597 "discovery_filter": "match_any", 00:23:51.597 "admin_cmd_passthru": { 00:23:51.597 "identify_ctrlr": false 00:23:51.597 }, 00:23:51.597 "dhchap_digests": [ 00:23:51.597 "sha256", 00:23:51.597 "sha384", 00:23:51.597 "sha512" 00:23:51.597 ], 00:23:51.597 "dhchap_dhgroups": [ 00:23:51.597 "null", 00:23:51.597 "ffdhe2048", 00:23:51.597 "ffdhe3072", 00:23:51.597 "ffdhe4096", 00:23:51.597 "ffdhe6144", 00:23:51.597 "ffdhe8192" 00:23:51.597 ] 00:23:51.597 } 00:23:51.597 }, 00:23:51.597 { 00:23:51.597 "method": "nvmf_set_max_subsystems", 00:23:51.597 "params": { 00:23:51.597 "max_subsystems": 1024 00:23:51.597 } 00:23:51.597 }, 00:23:51.597 { 00:23:51.597 "method": "nvmf_set_crdt", 00:23:51.597 "params": { 00:23:51.597 "crdt1": 0, 00:23:51.597 "crdt2": 0, 00:23:51.597 "crdt3": 0 00:23:51.597 } 00:23:51.597 }, 00:23:51.597 { 00:23:51.597 "method": "nvmf_create_transport", 00:23:51.597 "params": { 00:23:51.597 "trtype": "TCP", 00:23:51.597 "max_queue_depth": 128, 00:23:51.598 "max_io_qpairs_per_ctrlr": 127, 00:23:51.598 "in_capsule_data_size": 4096, 00:23:51.598 "max_io_size": 131072, 00:23:51.598 "io_unit_size": 131072, 00:23:51.598 "max_aq_depth": 128, 00:23:51.598 "num_shared_buffers": 511, 00:23:51.598 "buf_cache_size": 4294967295, 00:23:51.598 "dif_insert_or_strip": false, 00:23:51.598 "zcopy": false, 00:23:51.598 "c2h_success": false, 00:23:51.598 "sock_priority": 0, 00:23:51.598 "abort_timeout_sec": 1, 00:23:51.598 "ack_timeout": 0, 00:23:51.598 "data_wr_pool_size": 0 00:23:51.598 } 00:23:51.598 }, 00:23:51.598 { 00:23:51.598 "method": "nvmf_create_subsystem", 00:23:51.598 "params": { 00:23:51.598 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.598 "allow_any_host": false, 00:23:51.598 "serial_number": "SPDK00000000000001", 00:23:51.598 "model_number": "SPDK bdev Controller", 00:23:51.598 "max_namespaces": 10, 00:23:51.598 "min_cntlid": 1, 00:23:51.598 
"max_cntlid": 65519, 00:23:51.598 "ana_reporting": false 00:23:51.598 } 00:23:51.598 }, 00:23:51.598 { 00:23:51.598 "method": "nvmf_subsystem_add_host", 00:23:51.598 "params": { 00:23:51.598 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.598 "host": "nqn.2016-06.io.spdk:host1", 00:23:51.598 "psk": "key0" 00:23:51.598 } 00:23:51.598 }, 00:23:51.598 { 00:23:51.598 "method": "nvmf_subsystem_add_ns", 00:23:51.598 "params": { 00:23:51.598 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.598 "namespace": { 00:23:51.598 "nsid": 1, 00:23:51.598 "bdev_name": "malloc0", 00:23:51.598 "nguid": "D0F553B40DFF4161AB2681B0A6B5777E", 00:23:51.598 "uuid": "d0f553b4-0dff-4161-ab26-81b0a6b5777e", 00:23:51.598 "no_auto_visible": false 00:23:51.598 } 00:23:51.598 } 00:23:51.598 }, 00:23:51.598 { 00:23:51.598 "method": "nvmf_subsystem_add_listener", 00:23:51.598 "params": { 00:23:51.598 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.598 "listen_address": { 00:23:51.598 "trtype": "TCP", 00:23:51.598 "adrfam": "IPv4", 00:23:51.598 "traddr": "10.0.0.2", 00:23:51.598 "trsvcid": "4420" 00:23:51.598 }, 00:23:51.598 "secure_channel": true 00:23:51.598 } 00:23:51.598 } 00:23:51.598 ] 00:23:51.598 } 00:23:51.598 ] 00:23:51.598 }' 00:23:51.598 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:51.856 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:51.856 "subsystems": [ 00:23:51.856 { 00:23:51.856 "subsystem": "keyring", 00:23:51.856 "config": [ 00:23:51.856 { 00:23:51.856 "method": "keyring_file_add_key", 00:23:51.856 "params": { 00:23:51.856 "name": "key0", 00:23:51.856 "path": "/tmp/tmp.z1oj8GbH3J" 00:23:51.856 } 00:23:51.856 } 00:23:51.856 ] 00:23:51.856 }, 00:23:51.856 { 00:23:51.856 "subsystem": "iobuf", 00:23:51.856 "config": [ 00:23:51.856 { 00:23:51.856 "method": "iobuf_set_options", 00:23:51.856 "params": { 00:23:51.856 "small_pool_count": 8192, 00:23:51.856 "large_pool_count": 1024, 00:23:51.856 "small_bufsize": 8192, 00:23:51.856 "large_bufsize": 135168, 00:23:51.856 "enable_numa": false 00:23:51.856 } 00:23:51.856 } 00:23:51.856 ] 00:23:51.856 }, 00:23:51.856 { 00:23:51.856 "subsystem": "sock", 00:23:51.856 "config": [ 00:23:51.856 { 00:23:51.856 "method": "sock_set_default_impl", 00:23:51.856 "params": { 00:23:51.856 "impl_name": "posix" 00:23:51.856 } 00:23:51.856 }, 00:23:51.856 { 00:23:51.856 "method": "sock_impl_set_options", 00:23:51.856 "params": { 00:23:51.856 "impl_name": "ssl", 00:23:51.856 "recv_buf_size": 4096, 00:23:51.856 "send_buf_size": 4096, 00:23:51.856 "enable_recv_pipe": true, 00:23:51.856 "enable_quickack": false, 00:23:51.856 "enable_placement_id": 0, 00:23:51.856 "enable_zerocopy_send_server": true, 00:23:51.856 "enable_zerocopy_send_client": false, 00:23:51.856 "zerocopy_threshold": 0, 00:23:51.856 "tls_version": 0, 00:23:51.856 "enable_ktls": false 00:23:51.856 } 00:23:51.856 }, 00:23:51.856 { 00:23:51.856 "method": "sock_impl_set_options", 00:23:51.856 "params": { 00:23:51.856 "impl_name": "posix", 00:23:51.856 "recv_buf_size": 2097152, 00:23:51.856 "send_buf_size": 2097152, 00:23:51.856 "enable_recv_pipe": true, 00:23:51.856 "enable_quickack": false, 00:23:51.856 "enable_placement_id": 0, 00:23:51.856 "enable_zerocopy_send_server": true, 00:23:51.856 "enable_zerocopy_send_client": false, 00:23:51.856 "zerocopy_threshold": 0, 00:23:51.856 "tls_version": 0, 00:23:51.856 "enable_ktls": false 00:23:51.856 } 00:23:51.856 
} 00:23:51.856 ] 00:23:51.856 }, 00:23:51.856 { 00:23:51.856 "subsystem": "vmd", 00:23:51.856 "config": [] 00:23:51.856 }, 00:23:51.856 { 00:23:51.856 "subsystem": "accel", 00:23:51.856 "config": [ 00:23:51.856 { 00:23:51.856 "method": "accel_set_options", 00:23:51.856 "params": { 00:23:51.856 "small_cache_size": 128, 00:23:51.856 "large_cache_size": 16, 00:23:51.856 "task_count": 2048, 00:23:51.856 "sequence_count": 2048, 00:23:51.856 "buf_count": 2048 00:23:51.856 } 00:23:51.856 } 00:23:51.856 ] 00:23:51.856 }, 00:23:51.856 { 00:23:51.856 "subsystem": "bdev", 00:23:51.856 "config": [ 00:23:51.856 { 00:23:51.856 "method": "bdev_set_options", 00:23:51.856 "params": { 00:23:51.856 "bdev_io_pool_size": 65535, 00:23:51.856 "bdev_io_cache_size": 256, 00:23:51.856 "bdev_auto_examine": true, 00:23:51.856 "iobuf_small_cache_size": 128, 00:23:51.857 "iobuf_large_cache_size": 16 00:23:51.857 } 00:23:51.857 }, 00:23:51.857 { 00:23:51.857 "method": "bdev_raid_set_options", 00:23:51.857 "params": { 00:23:51.857 "process_window_size_kb": 1024, 00:23:51.857 "process_max_bandwidth_mb_sec": 0 00:23:51.857 } 00:23:51.857 }, 00:23:51.857 { 00:23:51.857 "method": "bdev_iscsi_set_options", 00:23:51.857 "params": { 00:23:51.857 "timeout_sec": 30 00:23:51.857 } 00:23:51.857 }, 00:23:51.857 { 00:23:51.857 "method": "bdev_nvme_set_options", 00:23:51.857 "params": { 00:23:51.857 "action_on_timeout": "none", 00:23:51.857 "timeout_us": 0, 00:23:51.857 "timeout_admin_us": 0, 00:23:51.857 "keep_alive_timeout_ms": 10000, 00:23:51.857 "arbitration_burst": 0, 00:23:51.857 "low_priority_weight": 0, 00:23:51.857 "medium_priority_weight": 0, 00:23:51.857 "high_priority_weight": 0, 00:23:51.857 "nvme_adminq_poll_period_us": 10000, 00:23:51.857 "nvme_ioq_poll_period_us": 0, 00:23:51.857 "io_queue_requests": 512, 00:23:51.857 "delay_cmd_submit": true, 00:23:51.857 "transport_retry_count": 4, 00:23:51.857 "bdev_retry_count": 3, 00:23:51.857 "transport_ack_timeout": 0, 00:23:51.857 "ctrlr_loss_timeout_sec": 0, 00:23:51.857 "reconnect_delay_sec": 0, 00:23:51.857 "fast_io_fail_timeout_sec": 0, 00:23:51.857 "disable_auto_failback": false, 00:23:51.857 "generate_uuids": false, 00:23:51.857 "transport_tos": 0, 00:23:51.857 "nvme_error_stat": false, 00:23:51.857 "rdma_srq_size": 0, 00:23:51.857 "io_path_stat": false, 00:23:51.857 "allow_accel_sequence": false, 00:23:51.857 "rdma_max_cq_size": 0, 00:23:51.857 "rdma_cm_event_timeout_ms": 0, 00:23:51.857 "dhchap_digests": [ 00:23:51.857 "sha256", 00:23:51.857 "sha384", 00:23:51.857 "sha512" 00:23:51.857 ], 00:23:51.857 "dhchap_dhgroups": [ 00:23:51.857 "null", 00:23:51.857 "ffdhe2048", 00:23:51.857 "ffdhe3072", 00:23:51.857 "ffdhe4096", 00:23:51.857 "ffdhe6144", 00:23:51.857 "ffdhe8192" 00:23:51.857 ] 00:23:51.857 } 00:23:51.857 }, 00:23:51.857 { 00:23:51.857 "method": "bdev_nvme_attach_controller", 00:23:51.857 "params": { 00:23:51.857 "name": "TLSTEST", 00:23:51.857 "trtype": "TCP", 00:23:51.857 "adrfam": "IPv4", 00:23:51.857 "traddr": "10.0.0.2", 00:23:51.857 "trsvcid": "4420", 00:23:51.857 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.857 "prchk_reftag": false, 00:23:51.857 "prchk_guard": false, 00:23:51.857 "ctrlr_loss_timeout_sec": 0, 00:23:51.857 "reconnect_delay_sec": 0, 00:23:51.857 "fast_io_fail_timeout_sec": 0, 00:23:51.857 "psk": "key0", 00:23:51.857 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:51.857 "hdgst": false, 00:23:51.857 "ddgst": false, 00:23:51.857 "multipath": "multipath" 00:23:51.857 } 00:23:51.857 }, 00:23:51.857 { 00:23:51.857 "method": 
"bdev_nvme_set_hotplug", 00:23:51.857 "params": { 00:23:51.857 "period_us": 100000, 00:23:51.857 "enable": false 00:23:51.857 } 00:23:51.857 }, 00:23:51.857 { 00:23:51.857 "method": "bdev_wait_for_examine" 00:23:51.857 } 00:23:51.857 ] 00:23:51.857 }, 00:23:51.857 { 00:23:51.857 "subsystem": "nbd", 00:23:51.857 "config": [] 00:23:51.857 } 00:23:51.857 ] 00:23:51.857 }' 00:23:51.857 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 278565 00:23:51.857 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 278565 ']' 00:23:51.857 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 278565 00:23:51.857 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:51.857 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:51.857 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 278565 00:23:52.115 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:52.115 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:52.115 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 278565' 00:23:52.115 killing process with pid 278565 00:23:52.115 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 278565 00:23:52.115 Received shutdown signal, test time was about 10.000000 seconds 00:23:52.115 00:23:52.115 Latency(us) 00:23:52.115 [2024-11-18T06:09:13.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.115 [2024-11-18T06:09:13.093Z] =================================================================================================================== 00:23:52.115 [2024-11-18T06:09:13.093Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:52.115 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 278565 00:23:52.115 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 278275 00:23:52.115 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 278275 ']' 00:23:52.115 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 278275 00:23:52.115 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:52.115 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.115 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 278275 00:23:52.115 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:52.115 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:52.116 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 278275' 00:23:52.116 killing process with pid 278275 00:23:52.116 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 278275 00:23:52.116 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 278275 00:23:52.375 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:52.375 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:52.375 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:52.375 "subsystems": [ 00:23:52.375 { 00:23:52.375 "subsystem": "keyring", 00:23:52.375 "config": [ 00:23:52.375 { 00:23:52.375 "method": "keyring_file_add_key", 00:23:52.375 "params": { 00:23:52.375 "name": "key0", 00:23:52.375 "path": "/tmp/tmp.z1oj8GbH3J" 00:23:52.375 } 00:23:52.375 } 00:23:52.375 ] 00:23:52.375 }, 00:23:52.375 { 00:23:52.375 "subsystem": "iobuf", 00:23:52.375 "config": [ 00:23:52.375 { 00:23:52.375 "method": "iobuf_set_options", 00:23:52.375 "params": { 00:23:52.375 "small_pool_count": 8192, 00:23:52.375 "large_pool_count": 1024, 00:23:52.375 "small_bufsize": 8192, 00:23:52.375 "large_bufsize": 135168, 00:23:52.375 "enable_numa": false 00:23:52.375 } 00:23:52.375 } 00:23:52.375 ] 00:23:52.375 }, 00:23:52.375 { 00:23:52.375 "subsystem": "sock", 00:23:52.375 "config": [ 00:23:52.375 { 00:23:52.375 "method": "sock_set_default_impl", 00:23:52.375 "params": { 00:23:52.375 "impl_name": "posix" 00:23:52.375 } 00:23:52.375 }, 00:23:52.375 { 00:23:52.375 "method": "sock_impl_set_options", 00:23:52.375 "params": { 00:23:52.375 "impl_name": "ssl", 00:23:52.375 "recv_buf_size": 4096, 00:23:52.375 "send_buf_size": 4096, 00:23:52.375 "enable_recv_pipe": true, 00:23:52.375 "enable_quickack": false, 00:23:52.375 "enable_placement_id": 0, 00:23:52.375 "enable_zerocopy_send_server": true, 00:23:52.375 "enable_zerocopy_send_client": false, 00:23:52.375 "zerocopy_threshold": 0, 00:23:52.375 "tls_version": 0, 00:23:52.375 "enable_ktls": false 00:23:52.375 } 00:23:52.375 }, 00:23:52.375 { 00:23:52.375 "method": "sock_impl_set_options", 00:23:52.375 "params": { 00:23:52.375 "impl_name": "posix", 00:23:52.375 "recv_buf_size": 2097152, 00:23:52.375 "send_buf_size": 2097152, 00:23:52.375 "enable_recv_pipe": true, 00:23:52.375 "enable_quickack": false, 00:23:52.375 "enable_placement_id": 0, 00:23:52.375 "enable_zerocopy_send_server": true, 00:23:52.375 "enable_zerocopy_send_client": false, 00:23:52.375 "zerocopy_threshold": 0, 00:23:52.375 "tls_version": 0, 00:23:52.375 "enable_ktls": false 00:23:52.375 } 00:23:52.375 } 00:23:52.375 ] 00:23:52.375 }, 00:23:52.375 { 00:23:52.375 "subsystem": "vmd", 00:23:52.375 "config": [] 00:23:52.375 }, 00:23:52.375 { 00:23:52.375 "subsystem": "accel", 00:23:52.375 "config": [ 00:23:52.375 { 00:23:52.375 "method": "accel_set_options", 00:23:52.375 "params": { 00:23:52.375 "small_cache_size": 128, 00:23:52.375 "large_cache_size": 16, 00:23:52.375 "task_count": 2048, 00:23:52.375 "sequence_count": 2048, 00:23:52.375 "buf_count": 2048 00:23:52.375 } 00:23:52.375 } 00:23:52.375 ] 00:23:52.375 }, 00:23:52.375 { 00:23:52.375 "subsystem": "bdev", 00:23:52.375 "config": [ 00:23:52.375 { 00:23:52.375 "method": "bdev_set_options", 00:23:52.375 "params": { 00:23:52.375 "bdev_io_pool_size": 65535, 00:23:52.375 "bdev_io_cache_size": 256, 00:23:52.375 "bdev_auto_examine": true, 00:23:52.375 "iobuf_small_cache_size": 128, 00:23:52.375 "iobuf_large_cache_size": 16 00:23:52.375 } 00:23:52.375 }, 00:23:52.375 { 00:23:52.375 "method": "bdev_raid_set_options", 00:23:52.375 "params": { 00:23:52.375 "process_window_size_kb": 1024, 00:23:52.375 "process_max_bandwidth_mb_sec": 0 00:23:52.375 } 00:23:52.375 }, 00:23:52.375 { 00:23:52.375 "method": "bdev_iscsi_set_options", 00:23:52.375 "params": { 00:23:52.375 
"timeout_sec": 30 00:23:52.375 } 00:23:52.375 }, 00:23:52.375 { 00:23:52.375 "method": "bdev_nvme_set_options", 00:23:52.375 "params": { 00:23:52.375 "action_on_timeout": "none", 00:23:52.375 "timeout_us": 0, 00:23:52.375 "timeout_admin_us": 0, 00:23:52.375 "keep_alive_timeout_ms": 10000, 00:23:52.375 "arbitration_burst": 0, 00:23:52.375 "low_priority_weight": 0, 00:23:52.375 "medium_priority_weight": 0, 00:23:52.375 "high_priority_weight": 0, 00:23:52.375 "nvme_adminq_poll_period_us": 10000, 00:23:52.375 "nvme_ioq_poll_period_us": 0, 00:23:52.375 "io_queue_requests": 0, 00:23:52.375 "delay_cmd_submit": true, 00:23:52.375 "transport_retry_count": 4, 00:23:52.375 "bdev_retry_count": 3, 00:23:52.375 "transport_ack_timeout": 0, 00:23:52.375 "ctrlr_loss_timeout_sec": 0, 00:23:52.375 "reconnect_delay_sec": 0, 00:23:52.375 "fast_io_fail_timeout_sec": 0, 00:23:52.375 "disable_auto_failback": false, 00:23:52.375 "generate_uuids": false, 00:23:52.375 "transport_tos": 0, 00:23:52.375 "nvme_error_stat": false, 00:23:52.375 "rdma_srq_size": 0, 00:23:52.375 "io_path_stat": false, 00:23:52.375 "allow_accel_sequence": false, 00:23:52.375 "rdma_max_cq_size": 0, 00:23:52.375 "rdma_cm_event_timeout_ms": 0, 00:23:52.375 "dhchap_digests": [ 00:23:52.375 "sha256", 00:23:52.375 "sha384", 00:23:52.375 "sha512" 00:23:52.375 ], 00:23:52.375 "dhchap_dhgroups": [ 00:23:52.375 "null", 00:23:52.375 "ffdhe2048", 00:23:52.375 "ffdhe3072", 00:23:52.375 "ffdhe4096", 00:23:52.375 "ffdhe6144", 00:23:52.375 "ffdhe8192" 00:23:52.375 ] 00:23:52.375 } 00:23:52.375 }, 00:23:52.375 { 00:23:52.375 "method": "bdev_nvme_set_hotplug", 00:23:52.375 "params": { 00:23:52.375 "period_us": 100000, 00:23:52.375 "enable": false 00:23:52.375 } 00:23:52.375 }, 00:23:52.375 { 00:23:52.375 "method": "bdev_malloc_create", 00:23:52.375 "params": { 00:23:52.375 "name": "malloc0", 00:23:52.375 "num_blocks": 8192, 00:23:52.375 "block_size": 4096, 00:23:52.375 "physical_block_size": 4096, 00:23:52.375 "uuid": "d0f553b4-0dff-4161-ab26-81b0a6b5777e", 00:23:52.375 "optimal_io_boundary": 0, 00:23:52.375 "md_size": 0, 00:23:52.375 "dif_type": 0, 00:23:52.375 "dif_is_head_of_md": false, 00:23:52.375 "dif_pi_format": 0 00:23:52.375 } 00:23:52.375 }, 00:23:52.375 { 00:23:52.375 "method": "bdev_wait_for_examine" 00:23:52.375 } 00:23:52.375 ] 00:23:52.375 }, 00:23:52.375 { 00:23:52.375 "subsystem": "nbd", 00:23:52.376 "config": [] 00:23:52.376 }, 00:23:52.376 { 00:23:52.376 "subsystem": "scheduler", 00:23:52.376 "config": [ 00:23:52.376 { 00:23:52.376 "method": "framework_set_scheduler", 00:23:52.376 "params": { 00:23:52.376 "name": "static" 00:23:52.376 } 00:23:52.376 } 00:23:52.376 ] 00:23:52.376 }, 00:23:52.376 { 00:23:52.376 "subsystem": "nvmf", 00:23:52.376 "config": [ 00:23:52.376 { 00:23:52.376 "method": "nvmf_set_config", 00:23:52.376 "params": { 00:23:52.376 "discovery_filter": "match_any", 00:23:52.376 "admin_cmd_passthru": { 00:23:52.376 "identify_ctrlr": false 00:23:52.376 }, 00:23:52.376 "dhchap_digests": [ 00:23:52.376 "sha256", 00:23:52.376 "sha384", 00:23:52.376 "sha512" 00:23:52.376 ], 00:23:52.376 "dhchap_dhgroups": [ 00:23:52.376 "null", 00:23:52.376 "ffdhe2048", 00:23:52.376 "ffdhe3072", 00:23:52.376 "ffdhe4096", 00:23:52.376 "ffdhe6144", 00:23:52.376 "ffdhe8192" 00:23:52.376 ] 00:23:52.376 } 00:23:52.376 }, 00:23:52.376 { 00:23:52.376 "method": "nvmf_set_max_subsystems", 00:23:52.376 "params": { 00:23:52.376 "max_subsystems": 1024 00:23:52.376 } 00:23:52.376 }, 00:23:52.376 { 00:23:52.376 "method": "nvmf_set_crdt", 00:23:52.376 "params": { 
00:23:52.376 "crdt1": 0, 00:23:52.376 "crdt2": 0, 00:23:52.376 "crdt3": 0 00:23:52.376 } 00:23:52.376 }, 00:23:52.376 { 00:23:52.376 "method": "nvmf_create_transport", 00:23:52.376 "params": { 00:23:52.376 "trtype": "TCP", 00:23:52.376 "max_queue_depth": 128, 00:23:52.376 "max_io_qpairs_per_ctrlr": 127, 00:23:52.376 "in_capsule_data_size": 4096, 00:23:52.376 "max_io_size": 131072, 00:23:52.376 "io_unit_size": 131072, 00:23:52.376 "max_aq_depth": 128, 00:23:52.376 "num_shared_buffers": 511, 00:23:52.376 "buf_cache_size": 4294967295, 00:23:52.376 "dif_insert_or_strip": false, 00:23:52.376 "zcopy": false, 00:23:52.376 "c2h_success": false, 00:23:52.376 "sock_priority": 0, 00:23:52.376 "abort_timeout_sec": 1, 00:23:52.376 "ack_timeout": 0, 00:23:52.376 "data_wr_pool_size": 0 00:23:52.376 } 00:23:52.376 }, 00:23:52.376 { 00:23:52.376 "method": "nvmf_create_subsystem", 00:23:52.376 "params": { 00:23:52.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.376 "allow_any_host": false, 00:23:52.376 "serial_number": "SPDK00000000000001", 00:23:52.376 "model_number": "SPDK bdev Controller", 00:23:52.376 "max_namespaces": 10, 00:23:52.376 "min_cntlid": 1, 00:23:52.376 "max_cntlid": 65519, 00:23:52.376 "ana_reporting": false 00:23:52.376 } 00:23:52.376 }, 00:23:52.376 { 00:23:52.376 "method": "nvmf_subsystem_add_host", 00:23:52.376 "params": { 00:23:52.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.376 "host": "nqn.2016-06.io.spdk:host1", 00:23:52.376 "psk": "key0" 00:23:52.376 } 00:23:52.376 }, 00:23:52.376 { 00:23:52.376 "method": "nvmf_subsystem_add_ns", 00:23:52.376 "params": { 00:23:52.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.376 "namespace": { 00:23:52.376 "nsid": 1, 00:23:52.376 "bdev_name": "malloc0", 00:23:52.376 "nguid": "D0F553B40DFF4161AB2681B0A6B5777E", 00:23:52.376 "uuid": "d0f553b4-0dff-4161-ab26-81b0a6b5777e", 00:23:52.376 "no_auto_visible": false 00:23:52.376 } 00:23:52.376 } 00:23:52.376 }, 00:23:52.376 { 00:23:52.376 "method": "nvmf_subsystem_add_listener", 00:23:52.376 "params": { 00:23:52.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.376 "listen_address": { 00:23:52.376 "trtype": "TCP", 00:23:52.376 "adrfam": "IPv4", 00:23:52.376 "traddr": "10.0.0.2", 00:23:52.376 "trsvcid": "4420" 00:23:52.376 }, 00:23:52.376 "secure_channel": true 00:23:52.376 } 00:23:52.376 } 00:23:52.376 ] 00:23:52.376 } 00:23:52.376 ] 00:23:52.376 }' 00:23:52.376 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.376 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.376 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=278739 00:23:52.376 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:52.376 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 278739 00:23:52.376 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 278739 ']' 00:23:52.376 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.376 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.376 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:52.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.376 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.376 07:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.376 [2024-11-18 07:09:13.330048] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:23:52.376 [2024-11-18 07:09:13.330150] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.635 [2024-11-18 07:09:13.404543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.635 [2024-11-18 07:09:13.449427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.635 [2024-11-18 07:09:13.449502] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.635 [2024-11-18 07:09:13.449534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.635 [2024-11-18 07:09:13.449558] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.635 [2024-11-18 07:09:13.449568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.635 [2024-11-18 07:09:13.450159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.893 [2024-11-18 07:09:13.689580] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.893 [2024-11-18 07:09:13.721605] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:52.893 [2024-11-18 07:09:13.721851] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.459 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.459 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:53.459 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:53.459 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:53.459 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.459 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.459 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=278881 00:23:53.459 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 278881 /var/tmp/bdevperf.sock 00:23:53.459 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 278881 ']' 00:23:53.459 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:53.459 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:53.459 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.459 07:09:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:53.459 "subsystems": [ 00:23:53.459 { 00:23:53.459 "subsystem": "keyring", 00:23:53.459 "config": [ 00:23:53.459 { 00:23:53.459 "method": "keyring_file_add_key", 00:23:53.459 "params": { 00:23:53.459 "name": "key0", 00:23:53.459 "path": "/tmp/tmp.z1oj8GbH3J" 00:23:53.459 } 00:23:53.459 } 00:23:53.459 ] 00:23:53.459 }, 00:23:53.459 { 00:23:53.459 "subsystem": "iobuf", 00:23:53.459 "config": [ 00:23:53.459 { 00:23:53.459 "method": "iobuf_set_options", 00:23:53.459 "params": { 00:23:53.459 "small_pool_count": 8192, 00:23:53.459 "large_pool_count": 1024, 00:23:53.459 "small_bufsize": 8192, 00:23:53.459 "large_bufsize": 135168, 00:23:53.459 "enable_numa": false 00:23:53.459 } 00:23:53.459 } 00:23:53.459 ] 00:23:53.459 }, 00:23:53.459 { 00:23:53.459 "subsystem": "sock", 00:23:53.459 "config": [ 00:23:53.459 { 00:23:53.459 "method": "sock_set_default_impl", 00:23:53.459 "params": { 00:23:53.459 "impl_name": "posix" 00:23:53.459 } 00:23:53.459 }, 00:23:53.459 { 00:23:53.459 "method": "sock_impl_set_options", 00:23:53.459 "params": { 00:23:53.459 "impl_name": "ssl", 00:23:53.459 "recv_buf_size": 4096, 00:23:53.459 "send_buf_size": 4096, 00:23:53.459 "enable_recv_pipe": true, 00:23:53.459 "enable_quickack": false, 00:23:53.459 "enable_placement_id": 0, 00:23:53.459 "enable_zerocopy_send_server": true, 00:23:53.459 "enable_zerocopy_send_client": false, 00:23:53.460 "zerocopy_threshold": 0, 00:23:53.460 "tls_version": 0, 00:23:53.460 "enable_ktls": false 00:23:53.460 } 00:23:53.460 }, 00:23:53.460 { 00:23:53.460 "method": "sock_impl_set_options", 00:23:53.460 "params": { 00:23:53.460 "impl_name": "posix", 00:23:53.460 "recv_buf_size": 2097152, 00:23:53.460 "send_buf_size": 2097152, 00:23:53.460 "enable_recv_pipe": true, 00:23:53.460 "enable_quickack": false, 00:23:53.460 "enable_placement_id": 0, 00:23:53.460 "enable_zerocopy_send_server": true, 00:23:53.460 "enable_zerocopy_send_client": false, 00:23:53.460 "zerocopy_threshold": 0, 00:23:53.460 "tls_version": 0, 00:23:53.460 "enable_ktls": false 00:23:53.460 } 00:23:53.460 } 00:23:53.460 ] 00:23:53.460 }, 00:23:53.460 { 00:23:53.460 "subsystem": "vmd", 00:23:53.460 "config": [] 00:23:53.460 }, 00:23:53.460 { 00:23:53.460 "subsystem": "accel", 00:23:53.460 "config": [ 00:23:53.460 { 00:23:53.460 "method": "accel_set_options", 00:23:53.460 "params": { 00:23:53.460 "small_cache_size": 128, 00:23:53.460 "large_cache_size": 16, 00:23:53.460 "task_count": 2048, 00:23:53.460 "sequence_count": 2048, 00:23:53.460 "buf_count": 2048 00:23:53.460 } 00:23:53.460 } 00:23:53.460 ] 00:23:53.460 }, 00:23:53.460 { 00:23:53.460 "subsystem": "bdev", 00:23:53.460 "config": [ 00:23:53.460 { 00:23:53.460 "method": "bdev_set_options", 00:23:53.460 "params": { 00:23:53.460 "bdev_io_pool_size": 65535, 00:23:53.460 "bdev_io_cache_size": 256, 00:23:53.460 "bdev_auto_examine": true, 00:23:53.460 "iobuf_small_cache_size": 128, 00:23:53.460 "iobuf_large_cache_size": 16 00:23:53.460 } 00:23:53.460 }, 00:23:53.460 { 00:23:53.460 "method": "bdev_raid_set_options", 00:23:53.460 "params": { 00:23:53.460 "process_window_size_kb": 1024, 00:23:53.460 "process_max_bandwidth_mb_sec": 0 00:23:53.460 } 00:23:53.460 }, 00:23:53.460 { 00:23:53.460 "method": "bdev_iscsi_set_options", 00:23:53.460 "params": { 00:23:53.460 "timeout_sec": 30 00:23:53.460 } 00:23:53.460 }, 00:23:53.460 { 00:23:53.460 "method": "bdev_nvme_set_options", 00:23:53.460 "params": { 00:23:53.460 "action_on_timeout": "none", 00:23:53.460 
"timeout_us": 0, 00:23:53.460 "timeout_admin_us": 0, 00:23:53.460 "keep_alive_timeout_ms": 10000, 00:23:53.460 "arbitration_burst": 0, 00:23:53.460 "low_priority_weight": 0, 00:23:53.460 "medium_priority_weight": 0, 00:23:53.460 "high_priority_weight": 0, 00:23:53.460 "nvme_adminq_poll_period_us": 10000, 00:23:53.460 "nvme_ioq_poll_period_us": 0, 00:23:53.460 "io_queue_requests": 512, 00:23:53.460 "delay_cmd_submit": true, 00:23:53.460 "transport_retry_count": 4, 00:23:53.460 "bdev_retry_count": 3, 00:23:53.460 "transport_ack_timeout": 0, 00:23:53.460 "ctrlr_loss_timeout_sec": 0, 00:23:53.460 "reconnect_delay_sec": 0, 00:23:53.460 "fast_io_fail_timeout_sec": 0, 00:23:53.460 "disable_auto_failback": false, 00:23:53.460 "generate_uuids": false, 00:23:53.460 "transport_tos": 0, 00:23:53.460 "nvme_error_stat": false, 00:23:53.460 "rdma_srq_size": 0, 00:23:53.460 "io_path_stat": false, 00:23:53.460 "allow_accel_sequence": false, 00:23:53.460 "rdma_max_cq_size": 0, 00:23:53.460 "rdma_cm_event_timeout_ms": 0, 00:23:53.460 "dhchap_digests": [ 00:23:53.460 "sha256", 00:23:53.460 "sha384", 00:23:53.460 "sha512" 00:23:53.460 ], 00:23:53.460 "dhchap_dhgroups": [ 00:23:53.460 "null", 00:23:53.460 "ffdhe2048", 00:23:53.460 "ffdhe3072", 00:23:53.460 "ffdhe4096", 00:23:53.460 "ffdhe6144", 00:23:53.460 "ffdhe8192" 00:23:53.460 ] 00:23:53.460 } 00:23:53.460 }, 00:23:53.460 { 00:23:53.460 "method": "bdev_nvme_attach_controller", 00:23:53.460 "params": { 00:23:53.460 "name": "TLSTEST", 00:23:53.460 "trtype": "TCP", 00:23:53.460 "adrfam": "IPv4", 00:23:53.460 "traddr": "10.0.0.2", 00:23:53.460 "trsvcid": "4420", 00:23:53.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.460 "prchk_reftag": false, 00:23:53.460 "prchk_guard": false, 00:23:53.460 "ctrlr_loss_timeout_sec": 0, 00:23:53.460 "reconnect_delay_sec": 0, 00:23:53.460 "fast_io_fail_timeout_sec": 0, 00:23:53.460 "psk": "key0", 00:23:53.460 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:53.460 "hdgst": false, 00:23:53.460 "ddgst": false, 00:23:53.460 "multipath": "multipath" 00:23:53.460 } 00:23:53.460 }, 00:23:53.460 { 00:23:53.460 "method": "bdev_nvme_set_hotplug", 00:23:53.460 "params": { 00:23:53.460 "period_us": 100000, 00:23:53.460 "enable": false 00:23:53.460 } 00:23:53.460 }, 00:23:53.460 { 00:23:53.460 "method": "bdev_wait_for_examine" 00:23:53.460 } 00:23:53.460 ] 00:23:53.460 }, 00:23:53.460 { 00:23:53.460 "subsystem": "nbd", 00:23:53.460 "config": [] 00:23:53.460 } 00:23:53.460 ] 00:23:53.460 }' 00:23:53.460 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:53.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:53.460 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.460 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.460 [2024-11-18 07:09:14.393341] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:23:53.460 [2024-11-18 07:09:14.393435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278881 ] 00:23:53.759 [2024-11-18 07:09:14.465523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.759 [2024-11-18 07:09:14.513510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.759 [2024-11-18 07:09:14.691424] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:54.017 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.017 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:54.017 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:54.017 Running I/O for 10 seconds... 00:23:56.323 3074.00 IOPS, 12.01 MiB/s [2024-11-18T06:09:18.233Z] 3097.50 IOPS, 12.10 MiB/s [2024-11-18T06:09:19.166Z] 3194.00 IOPS, 12.48 MiB/s [2024-11-18T06:09:20.099Z] 3244.75 IOPS, 12.67 MiB/s [2024-11-18T06:09:21.033Z] 3232.80 IOPS, 12.63 MiB/s [2024-11-18T06:09:21.965Z] 3228.67 IOPS, 12.61 MiB/s [2024-11-18T06:09:23.338Z] 3235.29 IOPS, 12.64 MiB/s [2024-11-18T06:09:24.271Z] 3237.50 IOPS, 12.65 MiB/s [2024-11-18T06:09:25.204Z] 3247.89 IOPS, 12.69 MiB/s [2024-11-18T06:09:25.204Z] 3255.20 IOPS, 12.72 MiB/s 00:24:04.226 Latency(us) 00:24:04.226 [2024-11-18T06:09:25.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.226 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:04.226 Verification LBA range: start 0x0 length 0x2000 00:24:04.226 TLSTESTn1 : 10.02 3261.39 12.74 0.00 0.00 39186.68 7670.14 79614.10 00:24:04.226 [2024-11-18T06:09:25.204Z] =================================================================================================================== 00:24:04.226 [2024-11-18T06:09:25.204Z] Total : 3261.39 12.74 0.00 0.00 39186.68 7670.14 79614.10 00:24:04.226 { 00:24:04.226 "results": [ 00:24:04.226 { 00:24:04.226 "job": "TLSTESTn1", 00:24:04.226 "core_mask": "0x4", 00:24:04.226 "workload": "verify", 00:24:04.226 "status": "finished", 00:24:04.226 "verify_range": { 00:24:04.226 "start": 0, 00:24:04.226 "length": 8192 00:24:04.226 }, 00:24:04.226 "queue_depth": 128, 00:24:04.226 "io_size": 4096, 00:24:04.226 "runtime": 10.019975, 00:24:04.226 "iops": 3261.3853826980608, 00:24:04.226 "mibps": 12.7397866511643, 00:24:04.226 "io_failed": 0, 00:24:04.226 "io_timeout": 0, 00:24:04.226 "avg_latency_us": 39186.677987562514, 00:24:04.226 "min_latency_us": 7670.139259259259, 00:24:04.226 "max_latency_us": 79614.1037037037 00:24:04.226 } 00:24:04.226 ], 00:24:04.226 "core_count": 1 00:24:04.226 } 00:24:04.226 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:04.226 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 278881 00:24:04.226 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 278881 ']' 00:24:04.226 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 278881 00:24:04.226 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:24:04.226 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:04.226 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 278881 00:24:04.226 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:04.226 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:04.226 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 278881' 00:24:04.226 killing process with pid 278881 00:24:04.226 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 278881 00:24:04.226 Received shutdown signal, test time was about 10.000000 seconds 00:24:04.226 00:24:04.226 Latency(us) 00:24:04.226 [2024-11-18T06:09:25.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.226 [2024-11-18T06:09:25.204Z] =================================================================================================================== 00:24:04.226 [2024-11-18T06:09:25.204Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:04.226 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 278881 00:24:04.484 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 278739 00:24:04.484 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 278739 ']' 00:24:04.484 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 278739 00:24:04.485 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:04.485 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:04.485 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 278739 00:24:04.485 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:04.485 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:04.485 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 278739' 00:24:04.485 killing process with pid 278739 00:24:04.485 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 278739 00:24:04.485 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 278739 00:24:04.743 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:04.743 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:04.743 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:04.743 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.743 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=280197 00:24:04.743 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:04.743 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 280197 00:24:04.743 07:09:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 280197 ']' 00:24:04.743 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.743 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.743 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.743 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.743 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.743 [2024-11-18 07:09:25.538538] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:24:04.743 [2024-11-18 07:09:25.538620] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.743 [2024-11-18 07:09:25.609700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.743 [2024-11-18 07:09:25.654539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.743 [2024-11-18 07:09:25.654591] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.743 [2024-11-18 07:09:25.654614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.743 [2024-11-18 07:09:25.654625] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.743 [2024-11-18 07:09:25.654634] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
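While the fresh target (pid 280197, launched at target/tls.sh@220 via ip netns exec cvl_0_0_ns_spdk as traced above) finishes starting up, the teardown that preceded it is worth reading as one pattern: common/autotest_common.sh@954-@978 ran twice, once for the bdevperf process 278881 and once for the previous target 278739, and both passes follow the same guarded kill. A condensed reconstruction, based only on the checks visible in this trace (the real helper has additional branches, for non-Linux hosts and for sudo-wrapped processes, that are not exercised here):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                    # @954: a pid must be supplied
      kill -0 "$pid" || return 1                   # @958: the process must still be alive
      local process_name=
      if [ "$(uname)" = Linux ]; then              # @959
          process_name=$(ps --no-headers -o comm= "$pid")   # @960: reactor_1 / reactor_2 here
      fi
      if [ "$process_name" = sudo ]; then          # @964: sudo wrappers take a different path (not shown above)
          return 1
      fi
      echo "killing process with pid $pid"         # @972
      kill "$pid"                                  # @973: the app then prints its shutdown Latency summary
      wait "$pid"                                  # @978: reap the process and propagate its exit status
  }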
00:24:04.743 [2024-11-18 07:09:25.655151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.001 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.001 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:05.001 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:05.001 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:05.001 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:05.001 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.001 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.z1oj8GbH3J 00:24:05.001 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z1oj8GbH3J 00:24:05.001 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:05.259 [2024-11-18 07:09:26.042066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.259 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:05.517 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:05.776 [2024-11-18 07:09:26.583561] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:05.776 [2024-11-18 07:09:26.583867] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.776 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:06.034 malloc0 00:24:06.034 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:06.292 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.z1oj8GbH3J 00:24:06.550 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:06.808 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=280488 00:24:06.808 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:06.808 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:06.808 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 280488 /var/tmp/bdevperf.sock 00:24:06.808 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 280488 ']' 00:24:06.808 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:06.808 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.808 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:06.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:06.808 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.808 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.808 [2024-11-18 07:09:27.739598] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:24:06.808 [2024-11-18 07:09:27.739676] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid280488 ] 00:24:07.066 [2024-11-18 07:09:27.807363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.066 [2024-11-18 07:09:27.853076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.066 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:07.066 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:07.066 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z1oj8GbH3J 00:24:07.324 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:07.581 [2024-11-18 07:09:28.495947] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:07.839 nvme0n1 00:24:07.839 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:07.839 Running I/O for 1 seconds... 
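While that one-second verify run (bdevperf pid 280488) is in flight, the bring-up that led into it reads best as a unit: target/tls.sh@52-@59 configured the target side of the secure channel (TCP transport, subsystem, listener with -k, malloc namespace, PSK key file and host grant), and @229-@230 registered the same key on the bdevperf side before attaching the controller with --psk. Condensed from the trace above, with the long path to rpc.py shortened for readability and the temporary key path, NQNs and address copied verbatim from the log:

  # target side (default RPC socket /var/tmp/spdk.sock)
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.z1oj8GbH3J
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

  # initiator side, against the bdevperf RPC socket
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z1oj8GbH3J
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1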
00:24:08.784 3315.00 IOPS, 12.95 MiB/s 00:24:08.784 Latency(us) 00:24:08.784 [2024-11-18T06:09:29.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.784 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:08.784 Verification LBA range: start 0x0 length 0x2000 00:24:08.784 nvme0n1 : 1.03 3330.20 13.01 0.00 0.00 37893.97 9369.22 39612.87 00:24:08.784 [2024-11-18T06:09:29.762Z] =================================================================================================================== 00:24:08.784 [2024-11-18T06:09:29.762Z] Total : 3330.20 13.01 0.00 0.00 37893.97 9369.22 39612.87 00:24:08.784 { 00:24:08.784 "results": [ 00:24:08.784 { 00:24:08.784 "job": "nvme0n1", 00:24:08.784 "core_mask": "0x2", 00:24:08.784 "workload": "verify", 00:24:08.784 "status": "finished", 00:24:08.784 "verify_range": { 00:24:08.784 "start": 0, 00:24:08.784 "length": 8192 00:24:08.784 }, 00:24:08.784 "queue_depth": 128, 00:24:08.784 "io_size": 4096, 00:24:08.784 "runtime": 1.034173, 00:24:08.784 "iops": 3330.197172039881, 00:24:08.784 "mibps": 13.008582703280785, 00:24:08.784 "io_failed": 0, 00:24:08.784 "io_timeout": 0, 00:24:08.784 "avg_latency_us": 37893.97124102035, 00:24:08.784 "min_latency_us": 9369.22074074074, 00:24:08.784 "max_latency_us": 39612.87111111111 00:24:08.784 } 00:24:08.784 ], 00:24:08.784 "core_count": 1 00:24:08.784 } 00:24:08.784 07:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 280488 00:24:08.784 07:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 280488 ']' 00:24:08.784 07:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 280488 00:24:08.784 07:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:08.784 07:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:08.784 07:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280488 00:24:09.042 07:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:09.042 07:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:09.042 07:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280488' 00:24:09.042 killing process with pid 280488 00:24:09.043 07:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 280488 00:24:09.043 Received shutdown signal, test time was about 1.000000 seconds 00:24:09.043 00:24:09.043 Latency(us) 00:24:09.043 [2024-11-18T06:09:30.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.043 [2024-11-18T06:09:30.021Z] =================================================================================================================== 00:24:09.043 [2024-11-18T06:09:30.021Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:09.043 07:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 280488 00:24:09.043 07:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 280197 00:24:09.043 07:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 280197 ']' 00:24:09.043 07:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 280197 00:24:09.043 07:09:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:09.043 07:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.043 07:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280197 00:24:09.043 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:09.043 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:09.043 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280197' 00:24:09.043 killing process with pid 280197 00:24:09.043 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 280197 00:24:09.043 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 280197 00:24:09.303 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:09.303 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:09.303 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:09.303 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.303 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=280767 00:24:09.303 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:09.303 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 280767 00:24:09.303 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 280767 ']' 00:24:09.303 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.303 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:09.303 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.303 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:09.303 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.563 [2024-11-18 07:09:30.282206] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:24:09.563 [2024-11-18 07:09:30.282333] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.563 [2024-11-18 07:09:30.357905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.563 [2024-11-18 07:09:30.398369] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.563 [2024-11-18 07:09:30.398432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:09.563 [2024-11-18 07:09:30.398456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.563 [2024-11-18 07:09:30.398467] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.563 [2024-11-18 07:09:30.398477] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:09.563 [2024-11-18 07:09:30.399062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.563 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.563 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:09.563 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:09.563 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:09.563 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.563 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:09.563 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:09.563 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.563 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.563 [2024-11-18 07:09:30.535944] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.821 malloc0 00:24:09.821 [2024-11-18 07:09:30.567092] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:09.821 [2024-11-18 07:09:30.567331] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.821 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.821 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=280872 00:24:09.821 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:09.821 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 280872 /var/tmp/bdevperf.sock 00:24:09.821 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 280872 ']' 00:24:09.821 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:09.821 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:09.821 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:09.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:09.821 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:09.821 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.821 [2024-11-18 07:09:30.639130] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:24:09.821 [2024-11-18 07:09:30.639194] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid280872 ] 00:24:09.821 [2024-11-18 07:09:30.705235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.821 [2024-11-18 07:09:30.750770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.079 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.079 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:10.079 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z1oj8GbH3J 00:24:10.337 07:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:10.595 [2024-11-18 07:09:31.413249] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:10.595 nvme0n1 00:24:10.595 07:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:10.853 Running I/O for 1 seconds... 00:24:11.787 3353.00 IOPS, 13.10 MiB/s 00:24:11.787 Latency(us) 00:24:11.787 [2024-11-18T06:09:32.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.787 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:11.787 Verification LBA range: start 0x0 length 0x2000 00:24:11.787 nvme0n1 : 1.03 3371.72 13.17 0.00 0.00 37500.31 9611.95 44079.03 00:24:11.787 [2024-11-18T06:09:32.765Z] =================================================================================================================== 00:24:11.787 [2024-11-18T06:09:32.765Z] Total : 3371.72 13.17 0.00 0.00 37500.31 9611.95 44079.03 00:24:11.787 { 00:24:11.787 "results": [ 00:24:11.787 { 00:24:11.787 "job": "nvme0n1", 00:24:11.787 "core_mask": "0x2", 00:24:11.787 "workload": "verify", 00:24:11.787 "status": "finished", 00:24:11.787 "verify_range": { 00:24:11.787 "start": 0, 00:24:11.787 "length": 8192 00:24:11.787 }, 00:24:11.787 "queue_depth": 128, 00:24:11.787 "io_size": 4096, 00:24:11.787 "runtime": 1.032708, 00:24:11.787 "iops": 3371.7178524810497, 00:24:11.787 "mibps": 13.1707728612541, 00:24:11.787 "io_failed": 0, 00:24:11.787 "io_timeout": 0, 00:24:11.787 "avg_latency_us": 37500.30510860085, 00:24:11.787 "min_latency_us": 9611.946666666667, 00:24:11.787 "max_latency_us": 44079.02814814815 00:24:11.787 } 00:24:11.787 ], 00:24:11.787 "core_count": 1 00:24:11.787 } 00:24:11.787 07:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:11.787 07:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.787 07:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:11.787 07:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.787 07:09:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:11.787 "subsystems": [ 00:24:11.787 { 00:24:11.787 "subsystem": "keyring", 00:24:11.787 "config": [ 00:24:11.787 { 00:24:11.787 "method": "keyring_file_add_key", 00:24:11.787 "params": { 00:24:11.787 "name": "key0", 00:24:11.787 "path": "/tmp/tmp.z1oj8GbH3J" 00:24:11.787 } 00:24:11.787 } 00:24:11.787 ] 00:24:11.787 }, 00:24:11.787 { 00:24:11.787 "subsystem": "iobuf", 00:24:11.787 "config": [ 00:24:11.787 { 00:24:11.787 "method": "iobuf_set_options", 00:24:11.787 "params": { 00:24:11.787 "small_pool_count": 8192, 00:24:11.787 "large_pool_count": 1024, 00:24:11.787 "small_bufsize": 8192, 00:24:11.787 "large_bufsize": 135168, 00:24:11.787 "enable_numa": false 00:24:11.787 } 00:24:11.787 } 00:24:11.787 ] 00:24:11.787 }, 00:24:11.787 { 00:24:11.787 "subsystem": "sock", 00:24:11.787 "config": [ 00:24:11.787 { 00:24:11.787 "method": "sock_set_default_impl", 00:24:11.787 "params": { 00:24:11.787 "impl_name": "posix" 00:24:11.787 } 00:24:11.787 }, 00:24:11.787 { 00:24:11.787 "method": "sock_impl_set_options", 00:24:11.787 "params": { 00:24:11.787 "impl_name": "ssl", 00:24:11.787 "recv_buf_size": 4096, 00:24:11.787 "send_buf_size": 4096, 00:24:11.787 "enable_recv_pipe": true, 00:24:11.787 "enable_quickack": false, 00:24:11.787 "enable_placement_id": 0, 00:24:11.787 "enable_zerocopy_send_server": true, 00:24:11.787 "enable_zerocopy_send_client": false, 00:24:11.787 "zerocopy_threshold": 0, 00:24:11.787 "tls_version": 0, 00:24:11.787 "enable_ktls": false 00:24:11.787 } 00:24:11.787 }, 00:24:11.787 { 00:24:11.787 "method": "sock_impl_set_options", 00:24:11.787 "params": { 00:24:11.787 "impl_name": "posix", 00:24:11.787 "recv_buf_size": 2097152, 00:24:11.787 "send_buf_size": 2097152, 00:24:11.787 "enable_recv_pipe": true, 00:24:11.787 "enable_quickack": false, 00:24:11.787 "enable_placement_id": 0, 00:24:11.787 "enable_zerocopy_send_server": true, 00:24:11.787 "enable_zerocopy_send_client": false, 00:24:11.787 "zerocopy_threshold": 0, 00:24:11.787 "tls_version": 0, 00:24:11.787 "enable_ktls": false 00:24:11.787 } 00:24:11.787 } 00:24:11.787 ] 00:24:11.787 }, 00:24:11.787 { 00:24:11.787 "subsystem": "vmd", 00:24:11.787 "config": [] 00:24:11.787 }, 00:24:11.787 { 00:24:11.787 "subsystem": "accel", 00:24:11.787 "config": [ 00:24:11.787 { 00:24:11.787 "method": "accel_set_options", 00:24:11.787 "params": { 00:24:11.787 "small_cache_size": 128, 00:24:11.787 "large_cache_size": 16, 00:24:11.787 "task_count": 2048, 00:24:11.787 "sequence_count": 2048, 00:24:11.787 "buf_count": 2048 00:24:11.787 } 00:24:11.787 } 00:24:11.787 ] 00:24:11.787 }, 00:24:11.787 { 00:24:11.787 "subsystem": "bdev", 00:24:11.787 "config": [ 00:24:11.787 { 00:24:11.787 "method": "bdev_set_options", 00:24:11.787 "params": { 00:24:11.787 "bdev_io_pool_size": 65535, 00:24:11.787 "bdev_io_cache_size": 256, 00:24:11.787 "bdev_auto_examine": true, 00:24:11.787 "iobuf_small_cache_size": 128, 00:24:11.787 "iobuf_large_cache_size": 16 00:24:11.787 } 00:24:11.787 }, 00:24:11.787 { 00:24:11.787 "method": "bdev_raid_set_options", 00:24:11.788 "params": { 00:24:11.788 "process_window_size_kb": 1024, 00:24:11.788 "process_max_bandwidth_mb_sec": 0 00:24:11.788 } 00:24:11.788 }, 00:24:11.788 { 00:24:11.788 "method": "bdev_iscsi_set_options", 00:24:11.788 "params": { 00:24:11.788 "timeout_sec": 30 00:24:11.788 } 00:24:11.788 }, 00:24:11.788 { 00:24:11.788 "method": "bdev_nvme_set_options", 00:24:11.788 "params": { 00:24:11.788 "action_on_timeout": "none", 00:24:11.788 
"timeout_us": 0, 00:24:11.788 "timeout_admin_us": 0, 00:24:11.788 "keep_alive_timeout_ms": 10000, 00:24:11.788 "arbitration_burst": 0, 00:24:11.788 "low_priority_weight": 0, 00:24:11.788 "medium_priority_weight": 0, 00:24:11.788 "high_priority_weight": 0, 00:24:11.788 "nvme_adminq_poll_period_us": 10000, 00:24:11.788 "nvme_ioq_poll_period_us": 0, 00:24:11.788 "io_queue_requests": 0, 00:24:11.788 "delay_cmd_submit": true, 00:24:11.788 "transport_retry_count": 4, 00:24:11.788 "bdev_retry_count": 3, 00:24:11.788 "transport_ack_timeout": 0, 00:24:11.788 "ctrlr_loss_timeout_sec": 0, 00:24:11.788 "reconnect_delay_sec": 0, 00:24:11.788 "fast_io_fail_timeout_sec": 0, 00:24:11.788 "disable_auto_failback": false, 00:24:11.788 "generate_uuids": false, 00:24:11.788 "transport_tos": 0, 00:24:11.788 "nvme_error_stat": false, 00:24:11.788 "rdma_srq_size": 0, 00:24:11.788 "io_path_stat": false, 00:24:11.788 "allow_accel_sequence": false, 00:24:11.788 "rdma_max_cq_size": 0, 00:24:11.788 "rdma_cm_event_timeout_ms": 0, 00:24:11.788 "dhchap_digests": [ 00:24:11.788 "sha256", 00:24:11.788 "sha384", 00:24:11.788 "sha512" 00:24:11.788 ], 00:24:11.788 "dhchap_dhgroups": [ 00:24:11.788 "null", 00:24:11.788 "ffdhe2048", 00:24:11.788 "ffdhe3072", 00:24:11.788 "ffdhe4096", 00:24:11.788 "ffdhe6144", 00:24:11.788 "ffdhe8192" 00:24:11.788 ] 00:24:11.788 } 00:24:11.788 }, 00:24:11.788 { 00:24:11.788 "method": "bdev_nvme_set_hotplug", 00:24:11.788 "params": { 00:24:11.788 "period_us": 100000, 00:24:11.788 "enable": false 00:24:11.788 } 00:24:11.788 }, 00:24:11.788 { 00:24:11.788 "method": "bdev_malloc_create", 00:24:11.788 "params": { 00:24:11.788 "name": "malloc0", 00:24:11.788 "num_blocks": 8192, 00:24:11.788 "block_size": 4096, 00:24:11.788 "physical_block_size": 4096, 00:24:11.788 "uuid": "151cb1af-ac65-466b-b79f-acb510a3605e", 00:24:11.788 "optimal_io_boundary": 0, 00:24:11.788 "md_size": 0, 00:24:11.788 "dif_type": 0, 00:24:11.788 "dif_is_head_of_md": false, 00:24:11.788 "dif_pi_format": 0 00:24:11.788 } 00:24:11.788 }, 00:24:11.788 { 00:24:11.788 "method": "bdev_wait_for_examine" 00:24:11.788 } 00:24:11.788 ] 00:24:11.788 }, 00:24:11.788 { 00:24:11.788 "subsystem": "nbd", 00:24:11.788 "config": [] 00:24:11.788 }, 00:24:11.788 { 00:24:11.788 "subsystem": "scheduler", 00:24:11.788 "config": [ 00:24:11.788 { 00:24:11.788 "method": "framework_set_scheduler", 00:24:11.788 "params": { 00:24:11.788 "name": "static" 00:24:11.788 } 00:24:11.788 } 00:24:11.788 ] 00:24:11.788 }, 00:24:11.788 { 00:24:11.788 "subsystem": "nvmf", 00:24:11.788 "config": [ 00:24:11.788 { 00:24:11.788 "method": "nvmf_set_config", 00:24:11.788 "params": { 00:24:11.788 "discovery_filter": "match_any", 00:24:11.788 "admin_cmd_passthru": { 00:24:11.788 "identify_ctrlr": false 00:24:11.788 }, 00:24:11.788 "dhchap_digests": [ 00:24:11.788 "sha256", 00:24:11.788 "sha384", 00:24:11.788 "sha512" 00:24:11.788 ], 00:24:11.788 "dhchap_dhgroups": [ 00:24:11.788 "null", 00:24:11.788 "ffdhe2048", 00:24:11.788 "ffdhe3072", 00:24:11.788 "ffdhe4096", 00:24:11.788 "ffdhe6144", 00:24:11.788 "ffdhe8192" 00:24:11.788 ] 00:24:11.788 } 00:24:11.788 }, 00:24:11.788 { 00:24:11.788 "method": "nvmf_set_max_subsystems", 00:24:11.788 "params": { 00:24:11.788 "max_subsystems": 1024 00:24:11.788 } 00:24:11.788 }, 00:24:11.788 { 00:24:11.788 "method": "nvmf_set_crdt", 00:24:11.788 "params": { 00:24:11.788 "crdt1": 0, 00:24:11.788 "crdt2": 0, 00:24:11.788 "crdt3": 0 00:24:11.788 } 00:24:11.788 }, 00:24:11.788 { 00:24:11.788 "method": "nvmf_create_transport", 00:24:11.788 "params": 
{ 00:24:11.788 "trtype": "TCP", 00:24:11.788 "max_queue_depth": 128, 00:24:11.788 "max_io_qpairs_per_ctrlr": 127, 00:24:11.788 "in_capsule_data_size": 4096, 00:24:11.788 "max_io_size": 131072, 00:24:11.788 "io_unit_size": 131072, 00:24:11.788 "max_aq_depth": 128, 00:24:11.788 "num_shared_buffers": 511, 00:24:11.788 "buf_cache_size": 4294967295, 00:24:11.788 "dif_insert_or_strip": false, 00:24:11.788 "zcopy": false, 00:24:11.788 "c2h_success": false, 00:24:11.788 "sock_priority": 0, 00:24:11.788 "abort_timeout_sec": 1, 00:24:11.788 "ack_timeout": 0, 00:24:11.788 "data_wr_pool_size": 0 00:24:11.788 } 00:24:11.788 }, 00:24:11.788 { 00:24:11.788 "method": "nvmf_create_subsystem", 00:24:11.788 "params": { 00:24:11.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.788 "allow_any_host": false, 00:24:11.788 "serial_number": "00000000000000000000", 00:24:11.788 "model_number": "SPDK bdev Controller", 00:24:11.788 "max_namespaces": 32, 00:24:11.788 "min_cntlid": 1, 00:24:11.788 "max_cntlid": 65519, 00:24:11.788 "ana_reporting": false 00:24:11.788 } 00:24:11.788 }, 00:24:11.788 { 00:24:11.788 "method": "nvmf_subsystem_add_host", 00:24:11.788 "params": { 00:24:11.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.788 "host": "nqn.2016-06.io.spdk:host1", 00:24:11.788 "psk": "key0" 00:24:11.788 } 00:24:11.788 }, 00:24:11.788 { 00:24:11.788 "method": "nvmf_subsystem_add_ns", 00:24:11.788 "params": { 00:24:11.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.788 "namespace": { 00:24:11.788 "nsid": 1, 00:24:11.788 "bdev_name": "malloc0", 00:24:11.788 "nguid": "151CB1AFAC65466BB79FACB510A3605E", 00:24:11.788 "uuid": "151cb1af-ac65-466b-b79f-acb510a3605e", 00:24:11.788 "no_auto_visible": false 00:24:11.788 } 00:24:11.788 } 00:24:11.788 }, 00:24:11.788 { 00:24:11.788 "method": "nvmf_subsystem_add_listener", 00:24:11.788 "params": { 00:24:11.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.788 "listen_address": { 00:24:11.788 "trtype": "TCP", 00:24:11.788 "adrfam": "IPv4", 00:24:11.788 "traddr": "10.0.0.2", 00:24:11.788 "trsvcid": "4420" 00:24:11.788 }, 00:24:11.788 "secure_channel": false, 00:24:11.788 "sock_impl": "ssl" 00:24:11.788 } 00:24:11.788 } 00:24:11.788 ] 00:24:11.788 } 00:24:11.788 ] 00:24:11.788 }' 00:24:11.788 07:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:12.354 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:12.354 "subsystems": [ 00:24:12.354 { 00:24:12.354 "subsystem": "keyring", 00:24:12.354 "config": [ 00:24:12.354 { 00:24:12.354 "method": "keyring_file_add_key", 00:24:12.354 "params": { 00:24:12.354 "name": "key0", 00:24:12.354 "path": "/tmp/tmp.z1oj8GbH3J" 00:24:12.354 } 00:24:12.354 } 00:24:12.354 ] 00:24:12.354 }, 00:24:12.354 { 00:24:12.354 "subsystem": "iobuf", 00:24:12.354 "config": [ 00:24:12.354 { 00:24:12.354 "method": "iobuf_set_options", 00:24:12.354 "params": { 00:24:12.354 "small_pool_count": 8192, 00:24:12.354 "large_pool_count": 1024, 00:24:12.354 "small_bufsize": 8192, 00:24:12.354 "large_bufsize": 135168, 00:24:12.354 "enable_numa": false 00:24:12.354 } 00:24:12.354 } 00:24:12.354 ] 00:24:12.354 }, 00:24:12.354 { 00:24:12.354 "subsystem": "sock", 00:24:12.354 "config": [ 00:24:12.354 { 00:24:12.354 "method": "sock_set_default_impl", 00:24:12.354 "params": { 00:24:12.354 "impl_name": "posix" 00:24:12.354 } 00:24:12.354 }, 00:24:12.354 { 00:24:12.354 "method": "sock_impl_set_options", 00:24:12.354 
"params": { 00:24:12.354 "impl_name": "ssl", 00:24:12.354 "recv_buf_size": 4096, 00:24:12.354 "send_buf_size": 4096, 00:24:12.354 "enable_recv_pipe": true, 00:24:12.354 "enable_quickack": false, 00:24:12.354 "enable_placement_id": 0, 00:24:12.354 "enable_zerocopy_send_server": true, 00:24:12.354 "enable_zerocopy_send_client": false, 00:24:12.354 "zerocopy_threshold": 0, 00:24:12.354 "tls_version": 0, 00:24:12.354 "enable_ktls": false 00:24:12.355 } 00:24:12.355 }, 00:24:12.355 { 00:24:12.355 "method": "sock_impl_set_options", 00:24:12.355 "params": { 00:24:12.355 "impl_name": "posix", 00:24:12.355 "recv_buf_size": 2097152, 00:24:12.355 "send_buf_size": 2097152, 00:24:12.355 "enable_recv_pipe": true, 00:24:12.355 "enable_quickack": false, 00:24:12.355 "enable_placement_id": 0, 00:24:12.355 "enable_zerocopy_send_server": true, 00:24:12.355 "enable_zerocopy_send_client": false, 00:24:12.355 "zerocopy_threshold": 0, 00:24:12.355 "tls_version": 0, 00:24:12.355 "enable_ktls": false 00:24:12.355 } 00:24:12.355 } 00:24:12.355 ] 00:24:12.355 }, 00:24:12.355 { 00:24:12.355 "subsystem": "vmd", 00:24:12.355 "config": [] 00:24:12.355 }, 00:24:12.355 { 00:24:12.355 "subsystem": "accel", 00:24:12.355 "config": [ 00:24:12.355 { 00:24:12.355 "method": "accel_set_options", 00:24:12.355 "params": { 00:24:12.355 "small_cache_size": 128, 00:24:12.355 "large_cache_size": 16, 00:24:12.355 "task_count": 2048, 00:24:12.355 "sequence_count": 2048, 00:24:12.355 "buf_count": 2048 00:24:12.355 } 00:24:12.355 } 00:24:12.355 ] 00:24:12.355 }, 00:24:12.355 { 00:24:12.355 "subsystem": "bdev", 00:24:12.355 "config": [ 00:24:12.355 { 00:24:12.355 "method": "bdev_set_options", 00:24:12.355 "params": { 00:24:12.355 "bdev_io_pool_size": 65535, 00:24:12.355 "bdev_io_cache_size": 256, 00:24:12.355 "bdev_auto_examine": true, 00:24:12.355 "iobuf_small_cache_size": 128, 00:24:12.355 "iobuf_large_cache_size": 16 00:24:12.355 } 00:24:12.355 }, 00:24:12.355 { 00:24:12.355 "method": "bdev_raid_set_options", 00:24:12.355 "params": { 00:24:12.355 "process_window_size_kb": 1024, 00:24:12.355 "process_max_bandwidth_mb_sec": 0 00:24:12.355 } 00:24:12.355 }, 00:24:12.355 { 00:24:12.355 "method": "bdev_iscsi_set_options", 00:24:12.355 "params": { 00:24:12.355 "timeout_sec": 30 00:24:12.355 } 00:24:12.355 }, 00:24:12.355 { 00:24:12.355 "method": "bdev_nvme_set_options", 00:24:12.355 "params": { 00:24:12.355 "action_on_timeout": "none", 00:24:12.355 "timeout_us": 0, 00:24:12.355 "timeout_admin_us": 0, 00:24:12.355 "keep_alive_timeout_ms": 10000, 00:24:12.355 "arbitration_burst": 0, 00:24:12.355 "low_priority_weight": 0, 00:24:12.355 "medium_priority_weight": 0, 00:24:12.355 "high_priority_weight": 0, 00:24:12.355 "nvme_adminq_poll_period_us": 10000, 00:24:12.355 "nvme_ioq_poll_period_us": 0, 00:24:12.355 "io_queue_requests": 512, 00:24:12.355 "delay_cmd_submit": true, 00:24:12.355 "transport_retry_count": 4, 00:24:12.355 "bdev_retry_count": 3, 00:24:12.355 "transport_ack_timeout": 0, 00:24:12.355 "ctrlr_loss_timeout_sec": 0, 00:24:12.355 "reconnect_delay_sec": 0, 00:24:12.355 "fast_io_fail_timeout_sec": 0, 00:24:12.355 "disable_auto_failback": false, 00:24:12.355 "generate_uuids": false, 00:24:12.355 "transport_tos": 0, 00:24:12.355 "nvme_error_stat": false, 00:24:12.355 "rdma_srq_size": 0, 00:24:12.355 "io_path_stat": false, 00:24:12.355 "allow_accel_sequence": false, 00:24:12.355 "rdma_max_cq_size": 0, 00:24:12.355 "rdma_cm_event_timeout_ms": 0, 00:24:12.355 "dhchap_digests": [ 00:24:12.355 "sha256", 00:24:12.355 "sha384", 00:24:12.355 
"sha512" 00:24:12.355 ], 00:24:12.355 "dhchap_dhgroups": [ 00:24:12.355 "null", 00:24:12.355 "ffdhe2048", 00:24:12.355 "ffdhe3072", 00:24:12.355 "ffdhe4096", 00:24:12.355 "ffdhe6144", 00:24:12.355 "ffdhe8192" 00:24:12.355 ] 00:24:12.355 } 00:24:12.355 }, 00:24:12.355 { 00:24:12.355 "method": "bdev_nvme_attach_controller", 00:24:12.355 "params": { 00:24:12.355 "name": "nvme0", 00:24:12.355 "trtype": "TCP", 00:24:12.355 "adrfam": "IPv4", 00:24:12.355 "traddr": "10.0.0.2", 00:24:12.355 "trsvcid": "4420", 00:24:12.355 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.355 "prchk_reftag": false, 00:24:12.355 "prchk_guard": false, 00:24:12.355 "ctrlr_loss_timeout_sec": 0, 00:24:12.355 "reconnect_delay_sec": 0, 00:24:12.355 "fast_io_fail_timeout_sec": 0, 00:24:12.355 "psk": "key0", 00:24:12.355 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:12.355 "hdgst": false, 00:24:12.355 "ddgst": false, 00:24:12.355 "multipath": "multipath" 00:24:12.355 } 00:24:12.355 }, 00:24:12.355 { 00:24:12.355 "method": "bdev_nvme_set_hotplug", 00:24:12.355 "params": { 00:24:12.355 "period_us": 100000, 00:24:12.355 "enable": false 00:24:12.355 } 00:24:12.355 }, 00:24:12.355 { 00:24:12.355 "method": "bdev_enable_histogram", 00:24:12.355 "params": { 00:24:12.355 "name": "nvme0n1", 00:24:12.355 "enable": true 00:24:12.355 } 00:24:12.355 }, 00:24:12.355 { 00:24:12.355 "method": "bdev_wait_for_examine" 00:24:12.355 } 00:24:12.355 ] 00:24:12.355 }, 00:24:12.355 { 00:24:12.355 "subsystem": "nbd", 00:24:12.355 "config": [] 00:24:12.355 } 00:24:12.355 ] 00:24:12.355 }' 00:24:12.355 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 280872 00:24:12.355 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 280872 ']' 00:24:12.355 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 280872 00:24:12.355 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:12.355 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:12.355 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280872 00:24:12.355 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:12.355 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:12.355 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280872' 00:24:12.355 killing process with pid 280872 00:24:12.355 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 280872 00:24:12.355 Received shutdown signal, test time was about 1.000000 seconds 00:24:12.355 00:24:12.355 Latency(us) 00:24:12.355 [2024-11-18T06:09:33.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.355 [2024-11-18T06:09:33.333Z] =================================================================================================================== 00:24:12.355 [2024-11-18T06:09:33.333Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:12.355 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 280872 00:24:12.614 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 280767 00:24:12.614 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 280767 ']' 
00:24:12.614 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 280767 00:24:12.614 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:12.614 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:12.614 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280767 00:24:12.614 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:12.614 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:12.614 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280767' 00:24:12.614 killing process with pid 280767 00:24:12.614 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 280767 00:24:12.614 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 280767 00:24:12.874 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:12.874 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:12.874 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:12.874 "subsystems": [ 00:24:12.874 { 00:24:12.874 "subsystem": "keyring", 00:24:12.874 "config": [ 00:24:12.874 { 00:24:12.874 "method": "keyring_file_add_key", 00:24:12.874 "params": { 00:24:12.874 "name": "key0", 00:24:12.874 "path": "/tmp/tmp.z1oj8GbH3J" 00:24:12.874 } 00:24:12.874 } 00:24:12.874 ] 00:24:12.874 }, 00:24:12.874 { 00:24:12.874 "subsystem": "iobuf", 00:24:12.874 "config": [ 00:24:12.874 { 00:24:12.874 "method": "iobuf_set_options", 00:24:12.874 "params": { 00:24:12.874 "small_pool_count": 8192, 00:24:12.874 "large_pool_count": 1024, 00:24:12.874 "small_bufsize": 8192, 00:24:12.874 "large_bufsize": 135168, 00:24:12.874 "enable_numa": false 00:24:12.874 } 00:24:12.874 } 00:24:12.874 ] 00:24:12.874 }, 00:24:12.874 { 00:24:12.874 "subsystem": "sock", 00:24:12.874 "config": [ 00:24:12.874 { 00:24:12.874 "method": "sock_set_default_impl", 00:24:12.874 "params": { 00:24:12.874 "impl_name": "posix" 00:24:12.874 } 00:24:12.874 }, 00:24:12.874 { 00:24:12.874 "method": "sock_impl_set_options", 00:24:12.874 "params": { 00:24:12.874 "impl_name": "ssl", 00:24:12.874 "recv_buf_size": 4096, 00:24:12.874 "send_buf_size": 4096, 00:24:12.874 "enable_recv_pipe": true, 00:24:12.874 "enable_quickack": false, 00:24:12.874 "enable_placement_id": 0, 00:24:12.874 "enable_zerocopy_send_server": true, 00:24:12.874 "enable_zerocopy_send_client": false, 00:24:12.874 "zerocopy_threshold": 0, 00:24:12.874 "tls_version": 0, 00:24:12.874 "enable_ktls": false 00:24:12.874 } 00:24:12.874 }, 00:24:12.874 { 00:24:12.874 "method": "sock_impl_set_options", 00:24:12.874 "params": { 00:24:12.874 "impl_name": "posix", 00:24:12.874 "recv_buf_size": 2097152, 00:24:12.874 "send_buf_size": 2097152, 00:24:12.874 "enable_recv_pipe": true, 00:24:12.874 "enable_quickack": false, 00:24:12.874 "enable_placement_id": 0, 00:24:12.874 "enable_zerocopy_send_server": true, 00:24:12.874 "enable_zerocopy_send_client": false, 00:24:12.874 "zerocopy_threshold": 0, 00:24:12.874 "tls_version": 0, 00:24:12.874 "enable_ktls": false 00:24:12.874 } 00:24:12.874 } 00:24:12.874 ] 00:24:12.874 }, 00:24:12.874 { 00:24:12.874 "subsystem": "vmd", 
00:24:12.874 "config": [] 00:24:12.874 }, 00:24:12.874 { 00:24:12.874 "subsystem": "accel", 00:24:12.874 "config": [ 00:24:12.874 { 00:24:12.874 "method": "accel_set_options", 00:24:12.874 "params": { 00:24:12.874 "small_cache_size": 128, 00:24:12.874 "large_cache_size": 16, 00:24:12.874 "task_count": 2048, 00:24:12.874 "sequence_count": 2048, 00:24:12.874 "buf_count": 2048 00:24:12.874 } 00:24:12.874 } 00:24:12.874 ] 00:24:12.874 }, 00:24:12.874 { 00:24:12.874 "subsystem": "bdev", 00:24:12.874 "config": [ 00:24:12.874 { 00:24:12.874 "method": "bdev_set_options", 00:24:12.874 "params": { 00:24:12.874 "bdev_io_pool_size": 65535, 00:24:12.874 "bdev_io_cache_size": 256, 00:24:12.874 "bdev_auto_examine": true, 00:24:12.874 "iobuf_small_cache_size": 128, 00:24:12.874 "iobuf_large_cache_size": 16 00:24:12.874 } 00:24:12.874 }, 00:24:12.874 { 00:24:12.874 "method": "bdev_raid_set_options", 00:24:12.874 "params": { 00:24:12.874 "process_window_size_kb": 1024, 00:24:12.874 "process_max_bandwidth_mb_sec": 0 00:24:12.874 } 00:24:12.874 }, 00:24:12.874 { 00:24:12.874 "method": "bdev_iscsi_set_options", 00:24:12.874 "params": { 00:24:12.874 "timeout_sec": 30 00:24:12.874 } 00:24:12.874 }, 00:24:12.874 { 00:24:12.874 "method": "bdev_nvme_set_options", 00:24:12.874 "params": { 00:24:12.874 "action_on_timeout": "none", 00:24:12.874 "timeout_us": 0, 00:24:12.874 "timeout_admin_us": 0, 00:24:12.874 "keep_alive_timeout_ms": 10000, 00:24:12.874 "arbitration_burst": 0, 00:24:12.874 "low_priority_weight": 0, 00:24:12.874 "medium_priority_weight": 0, 00:24:12.874 "high_priority_weight": 0, 00:24:12.874 "nvme_adminq_poll_period_us": 10000, 00:24:12.874 "nvme_ioq_poll_period_us": 0, 00:24:12.874 "io_queue_requests": 0, 00:24:12.874 "delay_cmd_submit": true, 00:24:12.874 "transport_retry_count": 4, 00:24:12.874 "bdev_retry_count": 3, 00:24:12.874 "transport_ack_timeout": 0, 00:24:12.874 "ctrlr_loss_timeout_sec": 0, 00:24:12.874 "reconnect_delay_sec": 0, 00:24:12.874 "fast_io_fail_timeout_sec": 0, 00:24:12.874 "disable_auto_failback": false, 00:24:12.874 "generate_uuids": false, 00:24:12.874 "transport_tos": 0, 00:24:12.874 "nvme_error_stat": false, 00:24:12.874 "rdma_srq_size": 0, 00:24:12.874 "io_path_stat": false, 00:24:12.874 "allow_accel_sequence": false, 00:24:12.874 "rdma_max_cq_size": 0, 00:24:12.874 "rdma_cm_event_timeout_ms": 0, 00:24:12.874 "dhchap_digests": [ 00:24:12.874 "sha256", 00:24:12.874 "sha384", 00:24:12.874 "sha512" 00:24:12.874 ], 00:24:12.874 "dhchap_dhgroups": [ 00:24:12.875 "null", 00:24:12.875 "ffdhe2048", 00:24:12.875 "ffdhe3072", 00:24:12.875 "ffdhe4096", 00:24:12.875 "ffdhe6144", 00:24:12.875 "ffdhe8192" 00:24:12.875 ] 00:24:12.875 } 00:24:12.875 }, 00:24:12.875 { 00:24:12.875 "method": "bdev_nvme_set_hotplug", 00:24:12.875 "params": { 00:24:12.875 "period_us": 100000, 00:24:12.875 "enable": false 00:24:12.875 } 00:24:12.875 }, 00:24:12.875 { 00:24:12.875 "method": "bdev_malloc_create", 00:24:12.875 "params": { 00:24:12.875 "name": "malloc0", 00:24:12.875 "num_blocks": 8192, 00:24:12.875 "block_size": 4096, 00:24:12.875 "physical_block_size": 4096, 00:24:12.875 "uuid": "151cb1af-ac65-466b-b79f-acb510a3605e", 00:24:12.875 "optimal_io_boundary": 0, 00:24:12.875 "md_size": 0, 00:24:12.875 "dif_type": 0, 00:24:12.875 "dif_is_head_of_md": false, 00:24:12.875 "dif_pi_format": 0 00:24:12.875 } 00:24:12.875 }, 00:24:12.875 { 00:24:12.875 "method": "bdev_wait_for_examine" 00:24:12.875 } 00:24:12.875 ] 00:24:12.875 }, 00:24:12.875 { 00:24:12.875 "subsystem": "nbd", 00:24:12.875 "config": [] 
00:24:12.875 }, 00:24:12.875 { 00:24:12.875 "subsystem": "scheduler", 00:24:12.875 "config": [ 00:24:12.875 { 00:24:12.875 "method": "framework_set_scheduler", 00:24:12.875 "params": { 00:24:12.875 "name": "static" 00:24:12.875 } 00:24:12.875 } 00:24:12.875 ] 00:24:12.875 }, 00:24:12.875 { 00:24:12.875 "subsystem": "nvmf", 00:24:12.875 "config": [ 00:24:12.875 { 00:24:12.875 "method": "nvmf_set_config", 00:24:12.875 "params": { 00:24:12.875 "discovery_filter": "match_any", 00:24:12.875 "admin_cmd_passthru": { 00:24:12.875 "identify_ctrlr": false 00:24:12.875 }, 00:24:12.875 "dhchap_digests": [ 00:24:12.875 "sha256", 00:24:12.875 "sha384", 00:24:12.875 "sha512" 00:24:12.875 ], 00:24:12.875 "dhchap_dhgroups": [ 00:24:12.875 "null", 00:24:12.875 "ffdhe2048", 00:24:12.875 "ffdhe3072", 00:24:12.875 "ffdhe4096", 00:24:12.875 "ffdhe6144", 00:24:12.875 "ffdhe8192" 00:24:12.875 ] 00:24:12.875 } 00:24:12.875 }, 00:24:12.875 { 00:24:12.875 "method": "nvmf_set_max_subsystems", 00:24:12.875 "params": { 00:24:12.875 "max_subsystems": 1024 00:24:12.875 } 00:24:12.875 }, 00:24:12.875 { 00:24:12.875 "method": "nvmf_set_crdt", 00:24:12.875 "params": { 00:24:12.875 "crdt1": 0, 00:24:12.875 "crdt2": 0, 00:24:12.875 "crdt3": 0 00:24:12.875 } 00:24:12.875 }, 00:24:12.875 { 00:24:12.875 "method": "nvmf_create_transport", 00:24:12.875 "params": { 00:24:12.875 "trtype": "TCP", 00:24:12.875 "max_queue_depth": 128, 00:24:12.875 "max_io_qpairs_per_ctrlr": 127, 00:24:12.875 "in_capsule_data_size": 4096, 00:24:12.875 "max_io_size": 131072, 00:24:12.875 "io_unit_size": 131072, 00:24:12.875 "max_aq_depth": 128, 00:24:12.875 "num_shared_buffers": 511, 00:24:12.875 "buf_cache_size": 4294967295, 00:24:12.875 "dif_insert_or_strip": false, 00:24:12.875 "zcopy": false, 00:24:12.875 "c2h_success": false, 00:24:12.875 "sock_priority": 0, 00:24:12.875 "abort_timeout_sec": 1, 00:24:12.875 "ack_timeout": 0, 00:24:12.875 "data_wr_pool_size": 0 00:24:12.875 } 00:24:12.875 }, 00:24:12.875 { 00:24:12.875 "method": "nvmf_create_subsystem", 00:24:12.875 "params": { 00:24:12.875 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.875 "allow_any_host": false, 00:24:12.875 "serial_number": "00000000000000000000", 00:24:12.875 "model_number": "SPDK bdev Controller", 00:24:12.875 "max_namespaces": 32, 00:24:12.875 "min_cntlid": 1, 00:24:12.875 "max_cntlid": 65519, 00:24:12.875 "ana_reporting": false 00:24:12.875 } 00:24:12.875 }, 00:24:12.875 { 00:24:12.875 "method": "nvmf_subsystem_add_host", 00:24:12.875 "params": { 00:24:12.875 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.875 "host": "nqn.2016-06.io.spdk:host1", 00:24:12.875 "psk": "key0" 00:24:12.875 } 00:24:12.875 }, 00:24:12.875 { 00:24:12.875 "method": "nvmf_subsystem_add_ns", 00:24:12.875 "params": { 00:24:12.875 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.875 "namespace": { 00:24:12.875 "nsid": 1, 00:24:12.875 "bdev_name": "malloc0", 00:24:12.875 "nguid": "151CB1AFAC65466BB79FACB510A3605E", 00:24:12.875 "uuid": "151cb1af-ac65-466b-b79f-acb510a3605e", 00:24:12.875 "no_auto_visible": false 00:24:12.875 } 00:24:12.875 } 00:24:12.875 }, 00:24:12.875 { 00:24:12.875 "method": "nvmf_subsystem_add_listener", 00:24:12.875 "params": { 00:24:12.875 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.875 "listen_address": { 00:24:12.875 "trtype": "TCP", 00:24:12.875 "adrfam": "IPv4", 00:24:12.875 "traddr": "10.0.0.2", 00:24:12.875 "trsvcid": "4420" 00:24:12.875 }, 00:24:12.875 "secure_channel": false, 00:24:12.875 "sock_impl": "ssl" 00:24:12.875 } 00:24:12.875 } 00:24:12.875 ] 00:24:12.875 } 00:24:12.875 
] 00:24:12.875 }' 00:24:12.875 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:12.875 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.875 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=281196 00:24:12.875 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:12.875 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 281196 00:24:12.875 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 281196 ']' 00:24:12.875 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.875 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.875 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.875 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.875 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.875 [2024-11-18 07:09:33.652628] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:24:12.875 [2024-11-18 07:09:33.652713] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.875 [2024-11-18 07:09:33.724239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.875 [2024-11-18 07:09:33.771255] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.875 [2024-11-18 07:09:33.771321] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:12.875 [2024-11-18 07:09:33.771335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.875 [2024-11-18 07:09:33.771346] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.875 [2024-11-18 07:09:33.771355] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
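The JSON blob echoed above is the full target state captured earlier with save_config and fed back to a fresh nvmf_tgt through -c /dev/fd/62, so the restarted target comes up with the same keyring entry, ssl socket listener and TLS host mapping without any further RPC calls. A hedged sketch of how such a dump can be inspected offline follows; the file name tgt_config.json is hypothetical (for example, the output of "rpc.py save_config" redirected to a file), while the JSON layout is exactly the one shown above.

    import json

    # Hypothetical file holding the JSON configuration echoed above.
    with open("tgt_config.json") as f:
        cfg = json.load(f)

    def section(name):
        # Return the config list of one subsystem ("keyring", "nvmf", ...).
        return next(s["config"] for s in cfg["subsystems"] if s["subsystem"] == name)

    # The PSK file registered under "key0".
    key = next(c for c in section("keyring") if c["method"] == "keyring_file_add_key")
    print(key["params"]["path"])                 # /tmp/tmp.z1oj8GbH3J

    # The TLS listener: "sock_impl": "ssl" with "secure_channel": false.
    lsn = next(c for c in section("nvmf")
               if c["method"] == "nvmf_subsystem_add_listener")
    print(lsn["params"]["listen_address"], lsn["params"]["sock_impl"])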
00:24:12.875 [2024-11-18 07:09:33.772009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.134 [2024-11-18 07:09:34.008548] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.134 [2024-11-18 07:09:34.040579] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:13.134 [2024-11-18 07:09:34.040828] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.067 07:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:14.067 07:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:14.067 07:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:14.067 07:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:14.067 07:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.067 07:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.067 07:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=281348 00:24:14.067 07:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 281348 /var/tmp/bdevperf.sock 00:24:14.067 07:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 281348 ']' 00:24:14.067 07:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:14.067 07:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:14.067 07:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:14.067 07:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:14.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
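Unlike the first pass, this bdevperf instance (pid 281348) is configured entirely from the JSON echoed below and passed as -c /dev/fd/63, so the keyring entry and the TLS controller attach happen at startup instead of through separate rpc.py calls. A rough equivalent launch, with the configuration written to an ordinary file instead of a process-substitution descriptor (the name bperf_config.json is hypothetical; the flags are the ones from the trace), could look like:

    import subprocess

    BDEVPERF = ("/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/"
                "build/examples/bdevperf")

    # bperf_config.json is assumed to hold the JSON blob echoed below
    # (keyring + sock + bdev_nvme_attach_controller with psk "key0").
    subprocess.run([BDEVPERF,
                    "-m", "2", "-z",                 # core mask 0x2, wait to be driven over RPC
                    "-r", "/var/tmp/bdevperf.sock",  # RPC socket later used by perform_tests
                    "-q", "128", "-o", "4k",         # queue depth 128, 4 KiB I/O size
                    "-w", "verify", "-t", "1",       # verify workload for 1 second
                    "-c", "bperf_config.json"],      # config file instead of /dev/fd/63
                   check=True)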
00:24:14.067 07:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:14.067 "subsystems": [ 00:24:14.067 { 00:24:14.067 "subsystem": "keyring", 00:24:14.067 "config": [ 00:24:14.067 { 00:24:14.067 "method": "keyring_file_add_key", 00:24:14.067 "params": { 00:24:14.067 "name": "key0", 00:24:14.067 "path": "/tmp/tmp.z1oj8GbH3J" 00:24:14.067 } 00:24:14.067 } 00:24:14.067 ] 00:24:14.067 }, 00:24:14.067 { 00:24:14.067 "subsystem": "iobuf", 00:24:14.067 "config": [ 00:24:14.067 { 00:24:14.067 "method": "iobuf_set_options", 00:24:14.067 "params": { 00:24:14.067 "small_pool_count": 8192, 00:24:14.067 "large_pool_count": 1024, 00:24:14.067 "small_bufsize": 8192, 00:24:14.067 "large_bufsize": 135168, 00:24:14.067 "enable_numa": false 00:24:14.067 } 00:24:14.067 } 00:24:14.067 ] 00:24:14.067 }, 00:24:14.067 { 00:24:14.067 "subsystem": "sock", 00:24:14.067 "config": [ 00:24:14.067 { 00:24:14.067 "method": "sock_set_default_impl", 00:24:14.067 "params": { 00:24:14.067 "impl_name": "posix" 00:24:14.067 } 00:24:14.067 }, 00:24:14.067 { 00:24:14.067 "method": "sock_impl_set_options", 00:24:14.067 "params": { 00:24:14.067 "impl_name": "ssl", 00:24:14.067 "recv_buf_size": 4096, 00:24:14.067 "send_buf_size": 4096, 00:24:14.067 "enable_recv_pipe": true, 00:24:14.067 "enable_quickack": false, 00:24:14.067 "enable_placement_id": 0, 00:24:14.067 "enable_zerocopy_send_server": true, 00:24:14.067 "enable_zerocopy_send_client": false, 00:24:14.067 "zerocopy_threshold": 0, 00:24:14.067 "tls_version": 0, 00:24:14.067 "enable_ktls": false 00:24:14.067 } 00:24:14.067 }, 00:24:14.067 { 00:24:14.067 "method": "sock_impl_set_options", 00:24:14.067 "params": { 00:24:14.067 "impl_name": "posix", 00:24:14.067 "recv_buf_size": 2097152, 00:24:14.067 "send_buf_size": 2097152, 00:24:14.067 "enable_recv_pipe": true, 00:24:14.067 "enable_quickack": false, 00:24:14.067 "enable_placement_id": 0, 00:24:14.067 "enable_zerocopy_send_server": true, 00:24:14.067 "enable_zerocopy_send_client": false, 00:24:14.067 "zerocopy_threshold": 0, 00:24:14.067 "tls_version": 0, 00:24:14.067 "enable_ktls": false 00:24:14.067 } 00:24:14.067 } 00:24:14.067 ] 00:24:14.067 }, 00:24:14.067 { 00:24:14.067 "subsystem": "vmd", 00:24:14.067 "config": [] 00:24:14.067 }, 00:24:14.067 { 00:24:14.067 "subsystem": "accel", 00:24:14.067 "config": [ 00:24:14.067 { 00:24:14.067 "method": "accel_set_options", 00:24:14.067 "params": { 00:24:14.067 "small_cache_size": 128, 00:24:14.067 "large_cache_size": 16, 00:24:14.067 "task_count": 2048, 00:24:14.067 "sequence_count": 2048, 00:24:14.067 "buf_count": 2048 00:24:14.067 } 00:24:14.067 } 00:24:14.067 ] 00:24:14.067 }, 00:24:14.067 { 00:24:14.067 "subsystem": "bdev", 00:24:14.067 "config": [ 00:24:14.067 { 00:24:14.067 "method": "bdev_set_options", 00:24:14.067 "params": { 00:24:14.067 "bdev_io_pool_size": 65535, 00:24:14.067 "bdev_io_cache_size": 256, 00:24:14.067 "bdev_auto_examine": true, 00:24:14.067 "iobuf_small_cache_size": 128, 00:24:14.067 "iobuf_large_cache_size": 16 00:24:14.067 } 00:24:14.067 }, 00:24:14.067 { 00:24:14.067 "method": "bdev_raid_set_options", 00:24:14.067 "params": { 00:24:14.067 "process_window_size_kb": 1024, 00:24:14.067 "process_max_bandwidth_mb_sec": 0 00:24:14.067 } 00:24:14.067 }, 00:24:14.067 { 00:24:14.067 "method": "bdev_iscsi_set_options", 00:24:14.067 "params": { 00:24:14.067 "timeout_sec": 30 00:24:14.067 } 00:24:14.067 }, 00:24:14.067 { 00:24:14.067 "method": "bdev_nvme_set_options", 00:24:14.067 "params": { 00:24:14.067 "action_on_timeout": "none", 
00:24:14.067 "timeout_us": 0, 00:24:14.067 "timeout_admin_us": 0, 00:24:14.067 "keep_alive_timeout_ms": 10000, 00:24:14.067 "arbitration_burst": 0, 00:24:14.067 "low_priority_weight": 0, 00:24:14.067 "medium_priority_weight": 0, 00:24:14.067 "high_priority_weight": 0, 00:24:14.067 "nvme_adminq_poll_period_us": 10000, 00:24:14.067 "nvme_ioq_poll_period_us": 0, 00:24:14.067 "io_queue_requests": 512, 00:24:14.067 "delay_cmd_submit": true, 00:24:14.067 "transport_retry_count": 4, 00:24:14.067 "bdev_retry_count": 3, 00:24:14.067 "transport_ack_timeout": 0, 00:24:14.067 "ctrlr_loss_timeout_sec": 0, 00:24:14.067 "reconnect_delay_sec": 0, 00:24:14.067 "fast_io_fail_timeout_sec": 0, 00:24:14.067 "disable_auto_failback": false, 00:24:14.067 "generate_uuids": false, 00:24:14.067 "transport_tos": 0, 00:24:14.067 "nvme_error_stat": false, 00:24:14.067 "rdma_srq_size": 0, 00:24:14.067 "io_path_stat": false, 00:24:14.067 "allow_accel_sequence": false, 00:24:14.067 "rdma_max_cq_size": 0, 00:24:14.067 "rdma_cm_event_timeout_ms": 0, 00:24:14.067 "dhchap_digests": [ 00:24:14.067 "sha256", 00:24:14.067 "sha384", 00:24:14.067 "sha512" 00:24:14.067 ], 00:24:14.067 "dhchap_dhgroups": [ 00:24:14.067 "null", 00:24:14.067 "ffdhe2048", 00:24:14.067 "ffdhe3072", 00:24:14.067 "ffdhe4096", 00:24:14.067 "ffdhe6144", 00:24:14.067 "ffdhe8192" 00:24:14.067 ] 00:24:14.067 } 00:24:14.067 }, 00:24:14.067 { 00:24:14.067 "method": "bdev_nvme_attach_controller", 00:24:14.067 "params": { 00:24:14.067 "name": "nvme0", 00:24:14.067 "trtype": "TCP", 00:24:14.067 "adrfam": "IPv4", 00:24:14.067 "traddr": "10.0.0.2", 00:24:14.068 "trsvcid": "4420", 00:24:14.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.068 "prchk_reftag": false, 00:24:14.068 "prchk_guard": false, 00:24:14.068 "ctrlr_loss_timeout_sec": 0, 00:24:14.068 "reconnect_delay_sec": 0, 00:24:14.068 "fast_io_fail_timeout_sec": 0, 00:24:14.068 "psk": "key0", 00:24:14.068 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:14.068 "hdgst": false, 00:24:14.068 "ddgst": false, 00:24:14.068 "multipath": "multipath" 00:24:14.068 } 00:24:14.068 }, 00:24:14.068 { 00:24:14.068 "method": "bdev_nvme_set_hotplug", 00:24:14.068 "params": { 00:24:14.068 "period_us": 100000, 00:24:14.068 "enable": false 00:24:14.068 } 00:24:14.068 }, 00:24:14.068 { 00:24:14.068 "method": "bdev_enable_histogram", 00:24:14.068 "params": { 00:24:14.068 "name": "nvme0n1", 00:24:14.068 "enable": true 00:24:14.068 } 00:24:14.068 }, 00:24:14.068 { 00:24:14.068 "method": "bdev_wait_for_examine" 00:24:14.068 } 00:24:14.068 ] 00:24:14.068 }, 00:24:14.068 { 00:24:14.068 "subsystem": "nbd", 00:24:14.068 "config": [] 00:24:14.068 } 00:24:14.068 ] 00:24:14.068 }' 00:24:14.068 07:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:14.068 07:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.068 [2024-11-18 07:09:34.757867] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:24:14.068 [2024-11-18 07:09:34.757962] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid281348 ] 00:24:14.068 [2024-11-18 07:09:34.824904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.068 [2024-11-18 07:09:34.871734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.326 [2024-11-18 07:09:35.051841] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:14.326 07:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:14.326 07:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:14.326 07:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:14.326 07:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:14.584 07:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.584 07:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:14.841 Running I/O for 1 seconds... 00:24:15.775 3281.00 IOPS, 12.82 MiB/s 00:24:15.775 Latency(us) 00:24:15.775 [2024-11-18T06:09:36.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.775 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:15.775 Verification LBA range: start 0x0 length 0x2000 00:24:15.775 nvme0n1 : 1.03 3325.92 12.99 0.00 0.00 38066.64 6140.97 37671.06 00:24:15.775 [2024-11-18T06:09:36.753Z] =================================================================================================================== 00:24:15.775 [2024-11-18T06:09:36.753Z] Total : 3325.92 12.99 0.00 0.00 38066.64 6140.97 37671.06 00:24:15.775 { 00:24:15.775 "results": [ 00:24:15.775 { 00:24:15.775 "job": "nvme0n1", 00:24:15.775 "core_mask": "0x2", 00:24:15.775 "workload": "verify", 00:24:15.775 "status": "finished", 00:24:15.775 "verify_range": { 00:24:15.775 "start": 0, 00:24:15.775 "length": 8192 00:24:15.775 }, 00:24:15.775 "queue_depth": 128, 00:24:15.775 "io_size": 4096, 00:24:15.775 "runtime": 1.02528, 00:24:15.775 "iops": 3325.9207240948813, 00:24:15.775 "mibps": 12.99187782849563, 00:24:15.775 "io_failed": 0, 00:24:15.775 "io_timeout": 0, 00:24:15.775 "avg_latency_us": 38066.6393413707, 00:24:15.775 "min_latency_us": 6140.965925925926, 00:24:15.775 "max_latency_us": 37671.0637037037 00:24:15.775 } 00:24:15.775 ], 00:24:15.775 "core_count": 1 00:24:15.775 } 00:24:15.775 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:15.775 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:15.775 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:15.775 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:15.775 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:15.775 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:24:15.775 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:15.775 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:15.775 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:15.775 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:15.775 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:15.775 nvmf_trace.0 00:24:15.775 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:15.775 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 281348 00:24:15.775 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 281348 ']' 00:24:15.775 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 281348 00:24:15.775 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:15.775 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:15.775 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 281348 00:24:16.033 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:16.033 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:16.033 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 281348' 00:24:16.033 killing process with pid 281348 00:24:16.033 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 281348 00:24:16.033 Received shutdown signal, test time was about 1.000000 seconds 00:24:16.033 00:24:16.033 Latency(us) 00:24:16.033 [2024-11-18T06:09:37.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.033 [2024-11-18T06:09:37.011Z] =================================================================================================================== 00:24:16.033 [2024-11-18T06:09:37.011Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:16.033 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 281348 00:24:16.033 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:16.033 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:16.033 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:16.033 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:16.033 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:16.033 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:16.033 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:16.033 rmmod nvme_tcp 00:24:16.033 rmmod nvme_fabrics 00:24:16.033 rmmod nvme_keyring 00:24:16.033 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:16.033 07:09:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:16.033 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:16.033 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 281196 ']' 00:24:16.033 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 281196 00:24:16.033 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 281196 ']' 00:24:16.033 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 281196 00:24:16.033 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:16.033 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:16.033 07:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 281196 00:24:16.293 07:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:16.293 07:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:16.293 07:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 281196' 00:24:16.293 killing process with pid 281196 00:24:16.293 07:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 281196 00:24:16.293 07:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 281196 00:24:16.293 07:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:16.293 07:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:16.293 07:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:16.293 07:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:16.293 07:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:16.293 07:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:16.293 07:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:16.293 07:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:16.293 07:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:16.293 07:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.293 07:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.293 07:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.SCq1rnewsh /tmp/tmp.6ThPg9Nyk4 /tmp/tmp.z1oj8GbH3J 00:24:18.827 00:24:18.827 real 1m22.049s 00:24:18.827 user 2m15.398s 00:24:18.827 sys 0m25.524s 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.827 ************************************ 00:24:18.827 END TEST nvmf_tls 00:24:18.827 
************************************ 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:18.827 ************************************ 00:24:18.827 START TEST nvmf_fips 00:24:18.827 ************************************ 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:18.827 * Looking for test storage... 00:24:18.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:18.827 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:18.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.827 --rc genhtml_branch_coverage=1 00:24:18.827 --rc genhtml_function_coverage=1 00:24:18.827 --rc genhtml_legend=1 00:24:18.827 --rc geninfo_all_blocks=1 00:24:18.828 --rc geninfo_unexecuted_blocks=1 00:24:18.828 00:24:18.828 ' 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:18.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.828 --rc genhtml_branch_coverage=1 00:24:18.828 --rc genhtml_function_coverage=1 00:24:18.828 --rc genhtml_legend=1 00:24:18.828 --rc geninfo_all_blocks=1 00:24:18.828 --rc geninfo_unexecuted_blocks=1 00:24:18.828 00:24:18.828 ' 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:18.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.828 --rc genhtml_branch_coverage=1 00:24:18.828 --rc genhtml_function_coverage=1 00:24:18.828 --rc genhtml_legend=1 00:24:18.828 --rc geninfo_all_blocks=1 00:24:18.828 --rc geninfo_unexecuted_blocks=1 00:24:18.828 00:24:18.828 ' 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:18.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.828 --rc genhtml_branch_coverage=1 00:24:18.828 --rc genhtml_function_coverage=1 00:24:18.828 --rc genhtml_legend=1 00:24:18.828 --rc geninfo_all_blocks=1 00:24:18.828 --rc geninfo_unexecuted_blocks=1 00:24:18.828 00:24:18.828 ' 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:18.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:18.828 07:09:39 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:18.828 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:18.829 Error setting digest 00:24:18.829 40425BC18D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:18.829 40425BC18D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:18.829 
07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:18.829 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:21.364 07:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:21.364 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:21.364 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:21.364 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:21.365 07:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:21.365 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:21.365 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:21.365 07:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:21.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:21.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:24:21.365 00:24:21.365 --- 10.0.0.2 ping statistics --- 00:24:21.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.365 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:21.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:21.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:24:21.365 00:24:21.365 --- 10.0.0.1 ping statistics --- 00:24:21.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.365 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:21.365 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:21.365 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=283612 00:24:21.365 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:21.365 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 283612 00:24:21.365 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 283612 ']' 00:24:21.365 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.365 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:21.365 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.365 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:21.365 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:21.365 [2024-11-18 07:09:42.084634] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
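The trace above is nvmftestinit wiring both ports of the E810 NIC into a self-contained NVMe/TCP topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the host as the initiator side (10.0.0.1/24), port 4420 is opened with an iptables rule, and a ping in each direction confirms the link before nvmf_tgt is started inside the namespace. Below is a minimal hand-written sketch of that setup, not the common.sh implementation; interface names, the namespace name, and addresses are copied from the log, everything else is illustrative.

# Minimal sketch of the loopback topology nvmftestinit builds above.
# Interface/namespace names and addresses come from the trace; the real
# helpers live in test/nvmf/common.sh (nvmf_tcp_init and friends).
TARGET_IF=cvl_0_0        # goes into the target namespace
INITIATOR_IF=cvl_0_1     # stays in the host as the initiator side
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port toward the initiator-facing interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Connectivity checks mirroring the pings in the log.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1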
00:24:21.365 [2024-11-18 07:09:42.084715] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.365 [2024-11-18 07:09:42.160217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.365 [2024-11-18 07:09:42.207540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.365 [2024-11-18 07:09:42.207601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.365 [2024-11-18 07:09:42.207616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.365 [2024-11-18 07:09:42.207628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.365 [2024-11-18 07:09:42.207639] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:21.365 [2024-11-18 07:09:42.208261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.365 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.365 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:21.365 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:21.365 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:21.365 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:21.623 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.624 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:21.624 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:21.624 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:21.624 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.HGx 00:24:21.624 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:21.624 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.HGx 00:24:21.624 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.HGx 00:24:21.624 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.HGx 00:24:21.624 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:21.882 [2024-11-18 07:09:42.653004] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.882 [2024-11-18 07:09:42.668995] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:21.882 [2024-11-18 07:09:42.669226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.882 malloc0 00:24:21.882 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:21.882 07:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=283737 00:24:21.882 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:21.882 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 283737 /var/tmp/bdevperf.sock 00:24:21.882 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 283737 ']' 00:24:21.882 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:21.882 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:21.882 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:21.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:21.882 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:21.882 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:21.882 [2024-11-18 07:09:42.803160] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:24:21.882 [2024-11-18 07:09:42.803257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid283737 ] 00:24:22.140 [2024-11-18 07:09:42.871625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.141 [2024-11-18 07:09:42.917001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.141 07:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:22.141 07:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:22.141 07:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.HGx 00:24:22.398 07:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:22.656 [2024-11-18 07:09:43.585389] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:22.913 TLSTESTn1 00:24:22.914 07:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:22.914 Running I/O for 10 seconds... 
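With the target listening on 10.0.0.2:4420, the FIPS test drives the initiator side entirely over the bdevperf RPC socket: the interchange-format PSK is written to a 0600 temp file, registered as a keyring key, and handed to bdev_nvme_attach_controller so the TLS handshake uses it, after which a 10-second verify workload runs. A condensed sketch of that RPC sequence, assuming the same socket path, key path, and subsystem/host NQNs shown in the log:

RPC=scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
KEY=/tmp/spdk-psk.HGx          # interchange-format TLS PSK, chmod 0600

# Register the PSK file with bdevperf's keyring, then attach the
# controller over TCP using that key for the TLS handshake.
$RPC -s "$SOCK" keyring_file_add_key key0 "$KEY"
$RPC -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# Run the 10-second verify workload against the attached controller.
examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests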
00:24:25.220 3241.00 IOPS, 12.66 MiB/s [2024-11-18T06:09:47.131Z] 3276.50 IOPS, 12.80 MiB/s [2024-11-18T06:09:48.063Z] 3308.67 IOPS, 12.92 MiB/s [2024-11-18T06:09:48.996Z] 3368.50 IOPS, 13.16 MiB/s [2024-11-18T06:09:49.929Z] 3268.20 IOPS, 12.77 MiB/s [2024-11-18T06:09:50.863Z] 3287.17 IOPS, 12.84 MiB/s [2024-11-18T06:09:52.231Z] 3316.71 IOPS, 12.96 MiB/s [2024-11-18T06:09:53.164Z] 3326.12 IOPS, 12.99 MiB/s [2024-11-18T06:09:54.101Z] 3318.00 IOPS, 12.96 MiB/s [2024-11-18T06:09:54.101Z] 3321.40 IOPS, 12.97 MiB/s 00:24:33.123 Latency(us) 00:24:33.123 [2024-11-18T06:09:54.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.123 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:33.123 Verification LBA range: start 0x0 length 0x2000 00:24:33.123 TLSTESTn1 : 10.02 3326.68 12.99 0.00 0.00 38413.91 8107.05 50875.35 00:24:33.123 [2024-11-18T06:09:54.101Z] =================================================================================================================== 00:24:33.123 [2024-11-18T06:09:54.101Z] Total : 3326.68 12.99 0.00 0.00 38413.91 8107.05 50875.35 00:24:33.123 { 00:24:33.123 "results": [ 00:24:33.123 { 00:24:33.123 "job": "TLSTESTn1", 00:24:33.123 "core_mask": "0x4", 00:24:33.123 "workload": "verify", 00:24:33.123 "status": "finished", 00:24:33.123 "verify_range": { 00:24:33.123 "start": 0, 00:24:33.123 "length": 8192 00:24:33.123 }, 00:24:33.123 "queue_depth": 128, 00:24:33.123 "io_size": 4096, 00:24:33.123 "runtime": 10.022303, 00:24:33.124 "iops": 3326.6805044708785, 00:24:33.124 "mibps": 12.99484572058937, 00:24:33.124 "io_failed": 0, 00:24:33.124 "io_timeout": 0, 00:24:33.124 "avg_latency_us": 38413.90724968813, 00:24:33.124 "min_latency_us": 8107.045925925926, 00:24:33.124 "max_latency_us": 50875.35407407407 00:24:33.124 } 00:24:33.124 ], 00:24:33.124 "core_count": 1 00:24:33.124 } 00:24:33.124 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:33.124 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:33.124 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:33.124 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:33.124 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:33.124 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:33.124 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:33.124 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:33.124 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:33.124 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:33.124 nvmf_trace.0 00:24:33.124 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:33.124 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 283737 00:24:33.124 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 283737 ']' 00:24:33.124 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 283737 00:24:33.124 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:33.124 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.124 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 283737 00:24:33.124 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:33.124 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:33.124 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 283737' 00:24:33.124 killing process with pid 283737 00:24:33.124 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 283737 00:24:33.124 Received shutdown signal, test time was about 10.000000 seconds 00:24:33.124 00:24:33.124 Latency(us) 00:24:33.124 [2024-11-18T06:09:54.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.124 [2024-11-18T06:09:54.102Z] =================================================================================================================== 00:24:33.124 [2024-11-18T06:09:54.102Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:33.124 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 283737 00:24:33.384 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:33.384 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:33.384 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:33.384 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:33.384 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:33.384 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:33.384 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:33.384 rmmod nvme_tcp 00:24:33.384 rmmod nvme_fabrics 00:24:33.384 rmmod nvme_keyring 00:24:33.384 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:33.384 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:33.384 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:33.384 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 283612 ']' 00:24:33.384 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 283612 00:24:33.384 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 283612 ']' 00:24:33.384 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 283612 00:24:33.384 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:33.384 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.384 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 283612 00:24:33.384 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:33.384 07:09:54 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:33.384 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 283612' 00:24:33.384 killing process with pid 283612 00:24:33.384 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 283612 00:24:33.384 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 283612 00:24:33.665 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:33.665 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:33.665 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:33.665 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:33.665 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:33.665 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:33.665 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:33.665 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:33.665 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:33.665 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.665 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:33.665 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.665 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:35.665 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.HGx 00:24:35.665 00:24:35.665 real 0m17.165s 00:24:35.665 user 0m22.890s 00:24:35.665 sys 0m5.259s 00:24:35.665 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:35.665 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:35.665 ************************************ 00:24:35.665 END TEST nvmf_fips 00:24:35.665 ************************************ 00:24:35.665 07:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:35.665 07:09:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:35.665 07:09:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:35.665 07:09:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:35.665 ************************************ 00:24:35.665 START TEST nvmf_control_msg_list 00:24:35.665 ************************************ 00:24:35.665 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:35.665 * Looking for test storage... 
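The cleanup trace above undoes the setup in reverse: bdevperf (reactor_2) is killed first, then the nvmf target (reactor_1), the nvme-tcp and nvme-fabrics modules are unloaded, only the iptables rules tagged SPDK_NVMF are stripped, the namespace and leftover addresses are removed, and the temporary PSK file is deleted; the 0m17s timing block then closes nvmf_fips before nvmf_control_msg_list begins. A rough outline of that order follows; it is a sketch, not the fips.sh cleanup()/nvmftestfini code, and the netns removal is an assumption since _remove_spdk_ns runs with xtrace suppressed.

# Teardown order as reflected in the log; PID variables are illustrative.
kill "$bdevperf_pid"; wait "$bdevperf_pid" 2>/dev/null   # initiator first
kill "$nvmfpid";      wait "$nvmfpid" 2>/dev/null        # then the target

sync
modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics/nvme_keyring deps
modprobe -v -r nvme-fabrics

# Remove only the comment-tagged SPDK rules, keep everything else.
iptables-save | grep -v SPDK_NVMF | iptables-restore

ip netns delete cvl_0_0_ns_spdk   # assumption: what _remove_spdk_ns does
ip -4 addr flush cvl_0_1
rm -f /tmp/spdk-psk.HGx           # discard the test PSK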
00:24:35.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:35.665 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:35.665 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:24:35.665 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:35.947 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:35.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.947 --rc genhtml_branch_coverage=1 00:24:35.947 --rc genhtml_function_coverage=1 00:24:35.947 --rc genhtml_legend=1 00:24:35.947 --rc geninfo_all_blocks=1 00:24:35.947 --rc geninfo_unexecuted_blocks=1 00:24:35.947 00:24:35.948 ' 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:35.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.948 --rc genhtml_branch_coverage=1 00:24:35.948 --rc genhtml_function_coverage=1 00:24:35.948 --rc genhtml_legend=1 00:24:35.948 --rc geninfo_all_blocks=1 00:24:35.948 --rc geninfo_unexecuted_blocks=1 00:24:35.948 00:24:35.948 ' 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:35.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.948 --rc genhtml_branch_coverage=1 00:24:35.948 --rc genhtml_function_coverage=1 00:24:35.948 --rc genhtml_legend=1 00:24:35.948 --rc geninfo_all_blocks=1 00:24:35.948 --rc geninfo_unexecuted_blocks=1 00:24:35.948 00:24:35.948 ' 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:35.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.948 --rc genhtml_branch_coverage=1 00:24:35.948 --rc genhtml_function_coverage=1 00:24:35.948 --rc genhtml_legend=1 00:24:35.948 --rc geninfo_all_blocks=1 00:24:35.948 --rc geninfo_unexecuted_blocks=1 00:24:35.948 00:24:35.948 ' 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:35.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:35.948 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:37.911 07:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:37.911 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.911 07:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:37.911 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:37.911 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:37.911 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:37.911 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:37.912 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:37.912 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.912 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.912 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.912 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:37.912 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.912 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.912 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:37.912 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:37.912 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.912 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.912 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:37.912 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:37.912 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.912 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:37.912 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:37.912 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:37.912 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:37.912 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:37.912 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:37.912 07:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:37.912 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:38.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:24:38.171 00:24:38.171 --- 10.0.0.2 ping statistics --- 00:24:38.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.171 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:38.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:24:38.171 00:24:38.171 --- 10.0.0.1 ping statistics --- 00:24:38.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.171 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=287011 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 287011 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 287011 ']' 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.171 07:09:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:38.171 [2024-11-18 07:09:58.969976] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:24:38.171 [2024-11-18 07:09:58.970047] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.171 [2024-11-18 07:09:59.038902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.171 [2024-11-18 07:09:59.080176] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.171 [2024-11-18 07:09:59.080236] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.171 [2024-11-18 07:09:59.080261] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.171 [2024-11-18 07:09:59.080271] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.171 [2024-11-18 07:09:59.080287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
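Condensed, the nvmftestinit/nvmfappstart sequence traced above amounts to the following shell steps (a sketch only — the real logic lives in test/nvmf/common.sh and autotest_common.sh; the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addressing, and the iptables comment string are taken from this run's trace, while the nvmf_tgt path is shortened here):

    # Flush the two e810 ports and move the target-side port into its own namespace
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address both ends: the initiator stays in the root namespace, the target lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port, then verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Load the kernel initiator and start the SPDK target inside the namespace;
    # waitforlisten then polls /var/tmp/spdk.sock until the app answers RPCs
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &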
00:24:38.171 [2024-11-18 07:09:59.080956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:38.433 [2024-11-18 07:09:59.217147] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:38.433 Malloc0 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.433 07:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:38.433 [2024-11-18 07:09:59.256363] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=287130 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=287132 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=287134 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:38.433 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 287130 00:24:38.433 [2024-11-18 07:09:59.335568] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:38.433 [2024-11-18 07:09:59.335872] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:38.433 [2024-11-18 07:09:59.336136] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:39.814 Initializing NVMe Controllers 00:24:39.814 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:39.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:39.814 Initialization complete. Launching workers. 
00:24:39.814 ======================================================== 00:24:39.814 Latency(us) 00:24:39.814 Device Information : IOPS MiB/s Average min max 00:24:39.814 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40887.00 40619.09 40944.76 00:24:39.814 ======================================================== 00:24:39.814 Total : 25.00 0.10 40887.00 40619.09 40944.76 00:24:39.814 00:24:39.814 Initializing NVMe Controllers 00:24:39.814 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:39.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:39.814 Initialization complete. Launching workers. 00:24:39.814 ======================================================== 00:24:39.814 Latency(us) 00:24:39.814 Device Information : IOPS MiB/s Average min max 00:24:39.814 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 6199.99 24.22 160.88 150.56 311.20 00:24:39.814 ======================================================== 00:24:39.814 Total : 6199.99 24.22 160.88 150.56 311.20 00:24:39.814 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 287132 00:24:39.814 Initializing NVMe Controllers 00:24:39.814 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:39.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:39.814 Initialization complete. Launching workers. 00:24:39.814 ======================================================== 00:24:39.814 Latency(us) 00:24:39.814 Device Information : IOPS MiB/s Average min max 00:24:39.814 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40900.26 40812.93 41009.95 00:24:39.814 ======================================================== 00:24:39.814 Total : 25.00 0.10 40900.26 40812.93 41009.95 00:24:39.814 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 287134 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:39.814 rmmod nvme_tcp 00:24:39.814 rmmod nvme_fabrics 00:24:39.814 rmmod nvme_keyring 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 
-- # '[' -n 287011 ']' 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 287011 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 287011 ']' 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 287011 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 287011 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 287011' 00:24:39.814 killing process with pid 287011 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 287011 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 287011 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:39.814 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.354 07:10:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:42.354 00:24:42.354 real 0m6.271s 00:24:42.354 user 0m5.559s 00:24:42.354 sys 0m2.617s 00:24:42.354 07:10:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:42.354 07:10:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:42.354 ************************************ 00:24:42.354 END TEST nvmf_control_msg_list 00:24:42.354 ************************************ 
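For reference, the target configuration that control_msg_list.sh drove over that RPC socket (through its rpc_cmd helper) is equivalent to the following direct scripts/rpc.py calls — a sketch reusing the exact parameters recorded in the trace above; the direct rpc.py invocation and the shortened paths are the only assumptions:

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

    # TCP transport with a deliberately tiny control-message pool (the point of this test)
    $RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a        # -a: allow any host
    $RPC bdev_malloc_create -b Malloc0 32 512                       # 32 MiB bdev, 512-byte blocks
    $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Three concurrent single-queue initiators then contend for that one control message
    for core in 0x2 0x4 0x8; do
        build/bin/spdk_nvme_perf -c $core -q 1 -o 4096 -w randread -t 1 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    done
    wait

Running the three perf instances against a transport limited to a single control message is what produces the skewed figures recorded above: one client sustains ~6.2k IOPS at ~161 µs average latency while the other two are held to ~25 IOPS at ~41 ms.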
00:24:42.354 07:10:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:42.354 07:10:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:42.354 07:10:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:42.354 07:10:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:42.354 ************************************ 00:24:42.354 START TEST nvmf_wait_for_buf 00:24:42.354 ************************************ 00:24:42.354 07:10:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:42.354 * Looking for test storage... 00:24:42.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:42.354 07:10:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:42.354 07:10:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:42.354 07:10:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:42.354 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:42.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.354 --rc genhtml_branch_coverage=1 00:24:42.354 --rc genhtml_function_coverage=1 00:24:42.355 --rc genhtml_legend=1 00:24:42.355 --rc geninfo_all_blocks=1 00:24:42.355 --rc geninfo_unexecuted_blocks=1 00:24:42.355 00:24:42.355 ' 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:42.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.355 --rc genhtml_branch_coverage=1 00:24:42.355 --rc genhtml_function_coverage=1 00:24:42.355 --rc genhtml_legend=1 00:24:42.355 --rc geninfo_all_blocks=1 00:24:42.355 --rc geninfo_unexecuted_blocks=1 00:24:42.355 00:24:42.355 ' 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:42.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.355 --rc genhtml_branch_coverage=1 00:24:42.355 --rc genhtml_function_coverage=1 00:24:42.355 --rc genhtml_legend=1 00:24:42.355 --rc geninfo_all_blocks=1 00:24:42.355 --rc geninfo_unexecuted_blocks=1 00:24:42.355 00:24:42.355 ' 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:42.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.355 --rc genhtml_branch_coverage=1 00:24:42.355 --rc genhtml_function_coverage=1 00:24:42.355 --rc genhtml_legend=1 00:24:42.355 --rc geninfo_all_blocks=1 00:24:42.355 --rc geninfo_unexecuted_blocks=1 00:24:42.355 00:24:42.355 ' 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:42.355 07:10:03 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:42.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:42.355 07:10:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:44.264 
07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:44.264 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:44.264 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.264 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:44.524 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:44.524 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:44.524 07:10:05 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:44.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:44.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:24:44.524 00:24:44.524 --- 10.0.0.2 ping statistics --- 00:24:44.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.524 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:44.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:44.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:24:44.524 00:24:44.524 --- 10.0.0.1 ping statistics --- 00:24:44.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.524 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.524 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:44.525 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:44.525 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:44.525 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:44.525 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:44.525 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:44.525 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=289230 00:24:44.525 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 289230 00:24:44.525 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 289230 ']' 00:24:44.525 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:44.525 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.525 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:44.525 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.525 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:44.525 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:44.525 [2024-11-18 07:10:05.454249] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
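The trace above is the harness's standard NVMe/TCP loopback bring-up: one port of the dual-port E810 NIC (cvl_0_0 in this run) is moved into a private network namespace to act as the target side, its sibling port (cvl_0_1) stays in the root namespace as the initiator, both sides get 10.0.0.x/24 addresses, an iptables rule opens TCP port 4420, connectivity is ping-checked in both directions, and only then is nvmf_tgt launched inside the namespace with --wait-for-rpc. A minimal standalone sketch of the same steps follows; the interface names and addresses are the ones seen in this run and are environment-specific, and the nvmf_tgt path assumes the sketch is run from the root of an SPDK checkout.

#!/usr/bin/env bash
# Sketch of the namespace-based TCP loopback used by the test harness above.
# Interface names (cvl_0_0 / cvl_0_1) and IPs match this particular run;
# adjust them for other hosts. Requires root.
set -euo pipefail

TARGET_IF=cvl_0_0          # moved into the namespace, becomes the target-side port
INITIATOR_IF=cvl_0_1       # stays in the root namespace as the initiator-side port
NS=cvl_0_0_ns_spdk         # private namespace for the SPDK target

# Start from clean interfaces and a fresh namespace
ip -4 addr flush "$TARGET_IF" || true
ip -4 addr flush "$INITIATOR_IF" || true
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

# Address the initiator side (root ns) and the target side (inside the ns)
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the default NVMe/TCP port, tagged so cleanup can strip the rule later,
# then verify reachability in both directions before starting the target
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: allow 4420'
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Launch the SPDK target inside the namespace; --wait-for-rpc defers
# subsystem initialization until framework_start_init is issued over RPC
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &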
00:24:44.525 [2024-11-18 07:10:05.454355] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:44.786 [2024-11-18 07:10:05.528240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.786 [2024-11-18 07:10:05.573778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:44.786 [2024-11-18 07:10:05.573850] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:44.786 [2024-11-18 07:10:05.573865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:44.786 [2024-11-18 07:10:05.573891] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:44.786 [2024-11-18 07:10:05.573901] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:44.786 [2024-11-18 07:10:05.574507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.786 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:44.786 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:44.786 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:44.786 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:44.786 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:44.786 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.786 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:44.786 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:44.786 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:44.786 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.786 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:44.786 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.786 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:44.786 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.786 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:44.786 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.786 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:44.786 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.786 07:10:05 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:45.047 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.047 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:45.047 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.047 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:45.047 Malloc0 00:24:45.047 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.047 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:45.047 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.047 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:45.047 [2024-11-18 07:10:05.837830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.047 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.047 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:45.047 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.047 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:45.047 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.047 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:45.047 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.047 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:45.047 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.047 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:45.047 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.047 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:45.047 [2024-11-18 07:10:05.862054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.047 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.047 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:45.047 [2024-11-18 07:10:05.951627] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:46.959 Initializing NVMe Controllers 00:24:46.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:46.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:46.959 Initialization complete. Launching workers. 00:24:46.959 ======================================================== 00:24:46.959 Latency(us) 00:24:46.959 Device Information : IOPS MiB/s Average min max 00:24:46.959 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 97.76 12.22 42383.03 7971.86 111738.24 00:24:46.959 ======================================================== 00:24:46.959 Total : 97.76 12.22 42383.03 7971.86 111738.24 00:24:46.959 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1542 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1542 -eq 0 ]] 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:46.959 rmmod nvme_tcp 00:24:46.959 rmmod nvme_fabrics 00:24:46.959 rmmod nvme_keyring 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 289230 ']' 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 289230 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 289230 ']' 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 289230 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 289230 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:46.959 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 289230' 00:24:46.959 killing process with pid 289230 00:24:46.960 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 289230 00:24:46.960 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 289230 00:24:46.960 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:46.960 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:46.960 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:46.960 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:46.960 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:46.960 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:46.960 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:46.960 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:46.960 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:46.960 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.960 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.960 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.868 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:48.868 00:24:48.868 real 0m6.932s 00:24:48.868 user 0m3.356s 00:24:48.868 sys 0m2.066s 00:24:48.868 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:48.868 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:48.868 ************************************ 00:24:48.868 END TEST nvmf_wait_for_buf 00:24:48.868 ************************************ 00:24:48.868 07:10:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:48.868 07:10:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:48.868 07:10:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:48.869 07:10:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:48.869 07:10:09 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:24:49.128 ************************************ 00:24:49.128 START TEST nvmf_fuzz 00:24:49.128 ************************************ 00:24:49.128 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:49.128 * Looking for test storage... 00:24:49.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:49.128 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:49.128 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:24:49.128 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:49.128 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:49.128 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:49.128 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:49.128 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:49.128 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:49.128 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:49.128 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:49.128 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:49.128 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:49.128 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:49.128 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:49.128 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:49.128 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:49.128 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:49.128 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:49.128 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:49.128 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:49.128 07:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:49.128 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:49.128 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:49.128 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:49.128 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:49.128 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:49.128 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:49.128 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:49.128 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:49.128 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:49.128 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:49.128 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:49.128 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:49.128 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:49.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.129 --rc genhtml_branch_coverage=1 00:24:49.129 --rc genhtml_function_coverage=1 00:24:49.129 --rc genhtml_legend=1 00:24:49.129 --rc geninfo_all_blocks=1 00:24:49.129 --rc geninfo_unexecuted_blocks=1 00:24:49.129 00:24:49.129 ' 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:49.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.129 --rc genhtml_branch_coverage=1 00:24:49.129 --rc genhtml_function_coverage=1 00:24:49.129 --rc genhtml_legend=1 00:24:49.129 --rc geninfo_all_blocks=1 00:24:49.129 --rc geninfo_unexecuted_blocks=1 00:24:49.129 00:24:49.129 ' 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:49.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.129 --rc genhtml_branch_coverage=1 00:24:49.129 --rc genhtml_function_coverage=1 00:24:49.129 --rc genhtml_legend=1 00:24:49.129 --rc geninfo_all_blocks=1 00:24:49.129 --rc geninfo_unexecuted_blocks=1 00:24:49.129 00:24:49.129 ' 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:49.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.129 --rc genhtml_branch_coverage=1 00:24:49.129 --rc genhtml_function_coverage=1 00:24:49.129 --rc genhtml_legend=1 00:24:49.129 --rc geninfo_all_blocks=1 00:24:49.129 --rc geninfo_unexecuted_blocks=1 00:24:49.129 00:24:49.129 ' 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:49.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:49.129 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:51.667 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:51.667 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:51.668 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:51.668 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:51.668 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:51.668 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:51.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:51.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:24:51.668 00:24:51.668 --- 10.0.0.2 ping statistics --- 00:24:51.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.668 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:51.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:51.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:24:51.668 00:24:51.668 --- 10.0.0.1 ping statistics --- 00:24:51.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.668 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:51.668 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=291446 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 291446 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 291446 ']' 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
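At this point waitforlisten (a helper sourced from autotest_common.sh) blocks until the nvmf_tgt process started by fabrics_fuzz.sh is alive and answering on its UNIX-domain RPC socket, /var/tmp/spdk.sock. The sketch below is a rough stand-in for that helper, not its actual implementation: it polls the socket with SPDK's scripts/rpc.py, using rpc_get_methods purely as a cheap liveness probe, and gives up after roughly ten seconds. The rpc.py path assumes the script runs from an SPDK checkout.

#!/usr/bin/env bash
# Rough stand-in for waitforlisten: poll the SPDK RPC socket until the target
# answers, or give up. Not the real helper from autotest_common.sh.
set -euo pipefail

RPC_SOCK=/var/tmp/spdk.sock
RPC_PY=./scripts/rpc.py              # adjust to the checkout location
PID=${1:?usage: wait_for_rpc.sh <nvmf_tgt pid>}

for _ in $(seq 1 100); do
    # Bail out early if the target died during startup
    kill -0 "$PID" 2>/dev/null || { echo "nvmf_tgt exited" >&2; exit 1; }
    # Any successful RPC means the socket is up; rpc_get_methods is cheap
    if "$RPC_PY" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
        exit 0
    fi
    sleep 0.1
done
echo "timed out waiting for $RPC_SOCK" >&2
exit 1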
00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:51.669 Malloc0 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:51.669 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:23.758 Fuzzing completed. 
Shutting down the fuzz application 00:25:23.758 00:25:23.758 Dumping successful admin opcodes: 00:25:23.758 8, 9, 10, 24, 00:25:23.758 Dumping successful io opcodes: 00:25:23.758 0, 9, 00:25:23.758 NS: 0x2000008eff00 I/O qp, Total commands completed: 488986, total successful commands: 2816, random_seed: 4005807296 00:25:23.758 NS: 0x2000008eff00 admin qp, Total commands completed: 59376, total successful commands: 472, random_seed: 1078098688 00:25:23.758 07:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:23.758 Fuzzing completed. Shutting down the fuzz application 00:25:23.758 00:25:23.758 Dumping successful admin opcodes: 00:25:23.758 24, 00:25:23.758 Dumping successful io opcodes: 00:25:23.758 00:25:23.758 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1907664659 00:25:23.758 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1907773964 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:23.758 rmmod nvme_tcp 00:25:23.758 rmmod nvme_fabrics 00:25:23.758 rmmod nvme_keyring 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 291446 ']' 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 291446 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 291446 ']' 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 291446 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:23.758 07:10:44 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 291446 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 291446' 00:25:23.758 killing process with pid 291446 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 291446 00:25:23.758 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 291446 00:25:24.017 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:24.017 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:24.017 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:24.017 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:24.017 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:24.017 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:24.017 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:24.017 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:24.017 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:24.017 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.017 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:24.017 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.921 07:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:25.921 07:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:26.180 00:25:26.180 real 0m37.050s 00:25:26.180 user 0m51.262s 00:25:26.180 sys 0m14.801s 00:25:26.180 07:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:26.180 07:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:26.180 ************************************ 00:25:26.180 END TEST nvmf_fuzz 00:25:26.180 ************************************ 00:25:26.180 07:10:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:26.180 07:10:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:26.180 07:10:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:26.180 07:10:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:26.180 
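
For reference, the nvmf_fuzz stage that ends above boils down to a short RPC sequence plus two nvme_fuzz runs, all of which appear verbatim in the trace. The following is a minimal stand-alone sketch, assuming an nvmf_tgt is already running with the default /var/tmp/spdk.sock RPC socket and using scripts/rpc.py in place of the harness's rpc_cmd wrapper; the absolute paths are the ones from this workspace.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK/scripts/rpc.py"

    # Target side: TCP transport, one malloc-backed namespace, one listener (fabrics_fuzz.sh@19-25).
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create -b Malloc0 64 512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'

    # Run 1: 30-second randomized fuzz with a fixed seed; run 2: replay of example.json.
    $SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
    $SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$trid" -j "$SPDK/test/app/fuzz/nvme_fuzz/example.json" -a

    # Teardown, as in fabrics_fuzz.sh@34.
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The opcode dumps above (admin opcodes 8, 9, 10, 24 and I/O opcodes 0, 9 in the randomized run) are the commands the fuzzer saw complete successfully against that cnode1 subsystem.
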
************************************ 00:25:26.180 START TEST nvmf_multiconnection 00:25:26.180 ************************************ 00:25:26.180 07:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:26.180 * Looking for test storage... 00:25:26.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:26.180 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:26.180 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:25:26.180 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:26.180 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:26.180 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:26.180 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:26.180 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:26.180 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:26.180 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:26.180 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:26.180 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:26.180 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:26.180 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:26.180 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:26.180 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:26.180 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:26.180 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:26.180 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:26.180 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:26.180 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:26.180 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:26.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.181 --rc genhtml_branch_coverage=1 00:25:26.181 --rc genhtml_function_coverage=1 00:25:26.181 --rc genhtml_legend=1 00:25:26.181 --rc geninfo_all_blocks=1 00:25:26.181 --rc geninfo_unexecuted_blocks=1 00:25:26.181 00:25:26.181 ' 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:26.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.181 --rc genhtml_branch_coverage=1 00:25:26.181 --rc genhtml_function_coverage=1 00:25:26.181 --rc genhtml_legend=1 00:25:26.181 --rc geninfo_all_blocks=1 00:25:26.181 --rc geninfo_unexecuted_blocks=1 00:25:26.181 00:25:26.181 ' 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:26.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.181 --rc genhtml_branch_coverage=1 00:25:26.181 --rc genhtml_function_coverage=1 00:25:26.181 --rc genhtml_legend=1 00:25:26.181 --rc geninfo_all_blocks=1 00:25:26.181 --rc geninfo_unexecuted_blocks=1 00:25:26.181 00:25:26.181 ' 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:26.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.181 --rc genhtml_branch_coverage=1 00:25:26.181 --rc genhtml_function_coverage=1 00:25:26.181 --rc genhtml_legend=1 00:25:26.181 --rc geninfo_all_blocks=1 00:25:26.181 --rc geninfo_unexecuted_blocks=1 00:25:26.181 00:25:26.181 ' 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:26.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:26.181 07:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:28.718 07:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:28.718 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:28.718 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:28.718 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.718 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:28.719 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:28.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:28.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:25:28.719 00:25:28.719 --- 10.0.0.2 ping statistics --- 00:25:28.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.719 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:28.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:28.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:25:28.719 00:25:28.719 --- 10.0.0.1 ping statistics --- 00:25:28.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.719 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=297052 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 297052 00:25:28.719 07:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 297052 ']' 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:28.719 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.719 [2024-11-18 07:10:49.488279] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:25:28.719 [2024-11-18 07:10:49.488368] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.719 [2024-11-18 07:10:49.562937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:28.719 [2024-11-18 07:10:49.613993] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.719 [2024-11-18 07:10:49.614071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.719 [2024-11-18 07:10:49.614085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.719 [2024-11-18 07:10:49.614097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.719 [2024-11-18 07:10:49.614107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
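
The nvmftestinit sequence traced above builds the point-to-point topology for this run: of the two ice/E810 ports found earlier, cvl_0_0 becomes the target-side interface inside a fresh network namespace and cvl_0_1 stays in the root namespace as the initiator side. A condensed sketch of those steps follows, using the same ip/iptables invocations shown in the trace (the cvl_0_* names and the nvmf_tgt path are specific to this host; substitute your own).

    # Move the target NIC into its own namespace; the initiator NIC stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open TCP port 4420 on the initiator-side interface (the harness's ipts helper tags the rule).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Sanity-check connectivity in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # The target then runs inside the namespace; RPC still works from the root namespace
    # because /var/tmp/spdk.sock is a Unix-domain socket, which network namespaces do not isolate.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
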
00:25:28.719 [2024-11-18 07:10:49.615645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.719 [2024-11-18 07:10:49.615672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:28.719 [2024-11-18 07:10:49.615729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:28.719 [2024-11-18 07:10:49.615731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.980 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:28.980 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:28.980 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:28.980 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:28.980 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.980 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:28.980 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:28.980 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.980 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.980 [2024-11-18 07:10:49.765952] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.980 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.980 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:28.980 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.980 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:28.980 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.980 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.980 Malloc1 00:25:28.980 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.980 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:28.980 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.980 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
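
The loop entered above (for i in $(seq 1 $NVMF_SUBSYS), multiconnection.sh@21) repeats the same four RPCs for each of the NVMF_SUBSYS=11 subsystems, as the Malloc1 through Malloc10 blocks around this point show. Condensed, and again assuming scripts/rpc.py against the default RPC socket rather than the harness's rpc_cmd wrapper, it is roughly:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NVMF_SUBSYS=11          # multiconnection.sh@14
    MALLOC_BDEV_SIZE=64     # MB, multiconnection.sh@11
    MALLOC_BLOCK_SIZE=512   # bytes, multiconnection.sh@12

    # One-time transport setup (multiconnection.sh@19), then one subsystem per iteration (@21-25).
    $rpc nvmf_create_transport -t tcp -o -u 8192

    for i in $(seq 1 $NVMF_SUBSYS); do
        $rpc bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc$i
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done

Each subsystem listens on the same 10.0.0.2:4420 address inside the target namespace; the rest of the test then connects to them from the initiator side.
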
00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.981 [2024-11-18 07:10:49.833031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.981 Malloc2 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.981 07:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.981 Malloc3 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.981 Malloc4 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.981 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.243 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.243 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:29.243 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.243 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.243 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.243 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:29.243 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.243 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.243 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.243 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.243 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:29.243 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.243 07:10:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.243 Malloc5 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.243 Malloc6 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.243 Malloc7 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:29.243 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.244 Malloc8 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.244 Malloc9 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:29.244 07:10:50 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.244 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.504 Malloc10 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.504 Malloc11 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.504 07:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:30.070 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:30.071 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:30.071 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:30.071 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:30.071 07:10:51 
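With all eleven subsystems exported, the host side (traced from here on) makes one TCP fabrics connection per subsystem with nvme-cli and then waits for the namespace to appear. A sketch of that connect loop, reusing the host NQN/ID printed in the trace; the waitforserial helper is reconstructed separately after the remaining connects below:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55

for i in $(seq 1 11); do
    # TCP fabrics connection to the i-th subsystem on the target at 10.0.0.2:4420
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    # Block until the namespace is visible as a block device with serial SPDKi
    waitforserial "SPDK$i"
done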
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:32.612 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:32.612 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:32.612 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:32.612 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:32.612 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:32.612 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:32.612 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.612 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:32.872 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:32.872 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:32.872 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:32.872 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:32.872 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:34.779 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:34.779 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:34.779 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:34.779 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:34.779 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:34.779 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:34.779 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.779 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:35.713 07:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:35.713 07:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:35.713 07:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:25:35.713 07:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:35.714 07:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:37.617 07:10:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:37.617 07:10:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:37.617 07:10:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:25:37.617 07:10:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:37.617 07:10:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:37.617 07:10:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:37.617 07:10:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.617 07:10:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:38.556 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:38.556 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:38.556 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:38.556 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:38.556 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:40.465 07:11:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:40.465 07:11:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:40.465 07:11:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:25:40.465 07:11:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:40.465 07:11:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:40.465 07:11:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:40.465 07:11:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:40.465 07:11:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:41.032 07:11:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:41.032 07:11:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # 
local i=0 00:25:41.032 07:11:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:41.032 07:11:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:41.032 07:11:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:42.934 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:42.934 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:42.934 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:25:43.194 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:43.194 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:43.194 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:43.194 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.194 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:43.763 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:43.763 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:43.763 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:43.763 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:43.763 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:46.294 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:46.294 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:46.294 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:25:46.294 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:46.294 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:46.294 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:46.294 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:46.294 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:46.553 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:46.553 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:46.553 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:46.553 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:46.553 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:49.090 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:49.090 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:49.090 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:49.090 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:49.090 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:49.090 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:49.090 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.090 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:49.349 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:49.349 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:49.349 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:49.349 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:49.349 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:51.882 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:51.882 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:51.882 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:25:51.882 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:51.882 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:51.882 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:51.882 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.882 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:52.450 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:52.450 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:52.450 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:52.450 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:52.450 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:54.354 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:54.354 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:54.354 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:25:54.354 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:54.354 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:54.354 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:54.354 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.354 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:55.291 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:55.291 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:55.291 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:55.291 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:55.291 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:57.196 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:57.196 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:57.196 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:25:57.196 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:57.196 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:57.196 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:57.197 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.197 07:11:18 
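Every waitforserial call traced above runs the same sequence from autotest_common.sh (the @1202-@1212 xtrace lines): seed the counters, sleep, then poll lsblk until the expected serial shows up. Reconstructed from those xtrace lines -- the in-loop retry delay and the failure return are assumptions, since the trace only ever shows the first poll succeeding:

waitforserial() {
    # $1 = serial to wait for (e.g. SPDK7), $2 = optional expected device count
    local i=0
    local nvme_device_counter=1 nvme_devices=0
    [[ -n "$2" ]] && nvme_device_counter="$2"

    sleep 2                                     # initial settle, as in the trace
    while (( i++ <= 15 )); do
        # Count block devices whose SERIAL column matches the requested serial
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$1")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 2                                 # retry delay (assumed)
    done
    return 1                                    # failure path (assumed)
}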
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:58.135 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:58.135 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:58.135 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:58.135 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:58.135 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:00.040 07:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:00.040 07:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:00.040 07:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:26:00.040 07:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:00.041 07:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:00.041 07:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:00.041 07:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:00.041 [global] 00:26:00.041 thread=1 00:26:00.041 invalidate=1 00:26:00.041 rw=read 00:26:00.041 time_based=1 00:26:00.041 runtime=10 00:26:00.041 ioengine=libaio 00:26:00.041 direct=1 00:26:00.041 bs=262144 00:26:00.041 iodepth=64 00:26:00.041 norandommap=1 00:26:00.041 numjobs=1 00:26:00.041 00:26:00.041 [job0] 00:26:00.041 filename=/dev/nvme0n1 00:26:00.041 [job1] 00:26:00.041 filename=/dev/nvme10n1 00:26:00.041 [job2] 00:26:00.041 filename=/dev/nvme1n1 00:26:00.041 [job3] 00:26:00.041 filename=/dev/nvme2n1 00:26:00.041 [job4] 00:26:00.041 filename=/dev/nvme3n1 00:26:00.041 [job5] 00:26:00.041 filename=/dev/nvme4n1 00:26:00.041 [job6] 00:26:00.041 filename=/dev/nvme5n1 00:26:00.041 [job7] 00:26:00.041 filename=/dev/nvme6n1 00:26:00.041 [job8] 00:26:00.041 filename=/dev/nvme7n1 00:26:00.041 [job9] 00:26:00.041 filename=/dev/nvme8n1 00:26:00.041 [job10] 00:26:00.041 filename=/dev/nvme9n1 00:26:00.041 Could not set queue depth (nvme0n1) 00:26:00.041 Could not set queue depth (nvme10n1) 00:26:00.041 Could not set queue depth (nvme1n1) 00:26:00.041 Could not set queue depth (nvme2n1) 00:26:00.041 Could not set queue depth (nvme3n1) 00:26:00.041 Could not set queue depth (nvme4n1) 00:26:00.041 Could not set queue depth (nvme5n1) 00:26:00.041 Could not set queue depth (nvme6n1) 00:26:00.041 Could not set queue depth (nvme7n1) 00:26:00.041 Could not set queue depth (nvme8n1) 00:26:00.041 Could not set queue depth (nvme9n1) 00:26:00.300 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.300 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.300 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.300 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.300 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.300 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.300 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.300 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.300 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.300 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.300 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.300 fio-3.35 00:26:00.300 Starting 11 threads 00:26:12.522 00:26:12.522 job0: (groupid=0, jobs=1): err= 0: pid=301300: Mon Nov 18 07:11:31 2024 00:26:12.522 read: IOPS=80, BW=20.1MiB/s (21.1MB/s)(204MiB/10130msec) 00:26:12.522 slat (usec): min=12, max=402007, avg=12336.47, stdev=44353.50 00:26:12.522 clat (msec): min=83, max=1308, avg=783.60, stdev=265.17 00:26:12.522 lat (msec): min=294, max=1394, avg=795.94, stdev=268.52 00:26:12.522 clat percentiles (msec): 00:26:12.522 | 1.00th=[ 300], 5.00th=[ 347], 10.00th=[ 384], 20.00th=[ 514], 00:26:12.522 | 30.00th=[ 667], 40.00th=[ 735], 50.00th=[ 793], 60.00th=[ 860], 00:26:12.522 | 70.00th=[ 936], 80.00th=[ 1045], 90.00th=[ 1150], 95.00th=[ 1183], 00:26:12.522 | 99.00th=[ 1284], 99.50th=[ 1301], 99.90th=[ 1301], 99.95th=[ 1301], 00:26:12.522 | 99.99th=[ 1301] 00:26:12.522 bw ( KiB/s): min=11264, max=44544, per=2.03%, avg=19195.55, stdev=7755.75, samples=20 00:26:12.522 iops : min= 44, max= 174, avg=74.85, stdev=30.27, samples=20 00:26:12.522 lat (msec) : 100=0.12%, 500=19.29%, 750=24.57%, 1000=31.82%, 2000=24.20% 00:26:12.522 cpu : usr=0.06%, sys=0.33%, ctx=94, majf=0, minf=4097 00:26:12.522 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.3% 00:26:12.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.522 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:12.522 issued rwts: total=814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:12.522 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:12.522 job1: (groupid=0, jobs=1): err= 0: pid=301303: Mon Nov 18 07:11:31 2024 00:26:12.522 read: IOPS=477, BW=119MiB/s (125MB/s)(1209MiB/10138msec) 00:26:12.522 slat (usec): min=8, max=635880, avg=1371.68, stdev=14208.99 00:26:12.522 clat (usec): min=608, max=1397.6k, avg=132663.52, stdev=182831.51 00:26:12.522 lat (usec): min=636, max=1397.7k, avg=134035.20, stdev=185242.08 00:26:12.522 clat percentiles (usec): 00:26:12.522 | 1.00th=[ 1467], 5.00th=[ 3982], 10.00th=[ 7177], 00:26:12.522 | 20.00th=[ 18744], 30.00th=[ 43254], 40.00th=[ 49546], 00:26:12.522 | 50.00th=[ 60556], 60.00th=[ 99091], 70.00th=[ 137364], 00:26:12.522 | 80.00th=[ 181404], 90.00th=[ 346031], 95.00th=[ 438305], 00:26:12.522 | 99.00th=[ 851444], 99.50th=[1061159], 99.90th=[1199571], 00:26:12.522 | 99.95th=[1199571], 99.99th=[1400898] 
00:26:12.522 bw ( KiB/s): min= 8192, max=403968, per=12.95%, avg=122174.75, stdev=110274.63, samples=20 00:26:12.522 iops : min= 32, max= 1578, avg=477.15, stdev=430.73, samples=20 00:26:12.522 lat (usec) : 750=0.04%, 1000=0.04% 00:26:12.522 lat (msec) : 2=1.36%, 4=3.58%, 10=12.55%, 20=2.60%, 50=20.90% 00:26:12.522 lat (msec) : 100=19.54%, 250=25.86%, 500=8.99%, 750=1.57%, 1000=2.40% 00:26:12.522 lat (msec) : 2000=0.56% 00:26:12.522 cpu : usr=0.17%, sys=1.09%, ctx=1454, majf=0, minf=4097 00:26:12.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:12.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:12.522 issued rwts: total=4837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:12.522 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:12.522 job2: (groupid=0, jobs=1): err= 0: pid=301304: Mon Nov 18 07:11:31 2024 00:26:12.522 read: IOPS=92, BW=23.2MiB/s (24.3MB/s)(235MiB/10136msec) 00:26:12.522 slat (usec): min=13, max=443113, avg=10722.09, stdev=37959.63 00:26:12.522 clat (msec): min=29, max=1325, avg=678.93, stdev=270.93 00:26:12.522 lat (msec): min=29, max=1325, avg=689.65, stdev=275.32 00:26:12.522 clat percentiles (msec): 00:26:12.522 | 1.00th=[ 144], 5.00th=[ 215], 10.00th=[ 253], 20.00th=[ 380], 00:26:12.522 | 30.00th=[ 584], 40.00th=[ 684], 50.00th=[ 743], 60.00th=[ 802], 00:26:12.522 | 70.00th=[ 835], 80.00th=[ 894], 90.00th=[ 978], 95.00th=[ 1083], 00:26:12.522 | 99.00th=[ 1217], 99.50th=[ 1250], 99.90th=[ 1334], 99.95th=[ 1334], 00:26:12.522 | 99.99th=[ 1334] 00:26:12.522 bw ( KiB/s): min= 9728, max=52224, per=2.38%, avg=22418.80, stdev=9501.21, samples=20 00:26:12.522 iops : min= 38, max= 204, avg=87.45, stdev=37.12, samples=20 00:26:12.522 lat (msec) : 50=0.64%, 100=0.11%, 250=8.40%, 500=19.36%, 750=23.30% 00:26:12.522 lat (msec) : 1000=38.40%, 2000=9.79% 00:26:12.522 cpu : usr=0.06%, sys=0.37%, ctx=119, majf=0, minf=4097 00:26:12.522 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.4%, >=64=93.3% 00:26:12.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.522 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:12.522 issued rwts: total=940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:12.522 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:12.522 job3: (groupid=0, jobs=1): err= 0: pid=301305: Mon Nov 18 07:11:31 2024 00:26:12.522 read: IOPS=157, BW=39.4MiB/s (41.3MB/s)(400MiB/10142msec) 00:26:12.522 slat (usec): min=9, max=647099, avg=4114.84, stdev=28825.01 00:26:12.522 clat (msec): min=20, max=1176, avg=401.78, stdev=316.64 00:26:12.522 lat (msec): min=20, max=1393, avg=405.90, stdev=320.59 00:26:12.522 clat percentiles (msec): 00:26:12.522 | 1.00th=[ 26], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 106], 00:26:12.522 | 30.00th=[ 157], 40.00th=[ 197], 50.00th=[ 296], 60.00th=[ 393], 00:26:12.522 | 70.00th=[ 651], 80.00th=[ 768], 90.00th=[ 860], 95.00th=[ 944], 00:26:12.522 | 99.00th=[ 1133], 99.50th=[ 1183], 99.90th=[ 1183], 99.95th=[ 1183], 00:26:12.522 | 99.99th=[ 1183] 00:26:12.522 bw ( KiB/s): min=11776, max=138752, per=4.38%, avg=41355.74, stdev=29633.35, samples=19 00:26:12.522 iops : min= 46, max= 542, avg=161.42, stdev=115.82, samples=19 00:26:12.522 lat (msec) : 50=6.45%, 100=12.52%, 250=25.91%, 500=19.96%, 750=13.83% 00:26:12.522 lat (msec) : 1000=18.27%, 2000=3.07% 00:26:12.522 cpu : usr=0.06%, sys=0.46%, ctx=272, majf=0, minf=4097 
00:26:12.522 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:26:12.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.522 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:12.522 issued rwts: total=1598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:12.522 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:12.522 job4: (groupid=0, jobs=1): err= 0: pid=301306: Mon Nov 18 07:11:31 2024 00:26:12.522 read: IOPS=839, BW=210MiB/s (220MB/s)(2108MiB/10045msec) 00:26:12.522 slat (usec): min=11, max=134499, avg=1118.22, stdev=4600.86 00:26:12.523 clat (usec): min=1484, max=492827, avg=75080.88, stdev=66059.25 00:26:12.523 lat (usec): min=1522, max=514290, avg=76199.10, stdev=66944.46 00:26:12.523 clat percentiles (msec): 00:26:12.523 | 1.00th=[ 21], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 37], 00:26:12.523 | 30.00th=[ 40], 40.00th=[ 42], 50.00th=[ 46], 60.00th=[ 55], 00:26:12.523 | 70.00th=[ 80], 80.00th=[ 107], 90.00th=[ 146], 95.00th=[ 205], 00:26:12.523 | 99.00th=[ 338], 99.50th=[ 447], 99.90th=[ 493], 99.95th=[ 493], 00:26:12.523 | 99.99th=[ 493] 00:26:12.523 bw ( KiB/s): min=36352, max=363008, per=22.70%, avg=214153.50, stdev=101324.69, samples=20 00:26:12.523 iops : min= 142, max= 1418, avg=836.45, stdev=395.88, samples=20 00:26:12.523 lat (msec) : 2=0.01%, 4=0.17%, 10=0.26%, 20=0.52%, 50=55.10% 00:26:12.523 lat (msec) : 100=21.00%, 250=19.37%, 500=3.57% 00:26:12.523 cpu : usr=0.47%, sys=2.81%, ctx=1291, majf=0, minf=4097 00:26:12.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:12.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:12.523 issued rwts: total=8430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:12.523 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:12.523 job5: (groupid=0, jobs=1): err= 0: pid=301314: Mon Nov 18 07:11:31 2024 00:26:12.523 read: IOPS=238, BW=59.7MiB/s (62.6MB/s)(605MiB/10138msec) 00:26:12.523 slat (usec): min=8, max=556581, avg=2804.80, stdev=25326.20 00:26:12.523 clat (usec): min=1488, max=1794.8k, avg=264993.11, stdev=330217.61 00:26:12.523 lat (usec): min=1524, max=1794.8k, avg=267797.91, stdev=334454.46 00:26:12.523 clat percentiles (msec): 00:26:12.523 | 1.00th=[ 7], 5.00th=[ 22], 10.00th=[ 39], 20.00th=[ 61], 00:26:12.523 | 30.00th=[ 69], 40.00th=[ 103], 50.00th=[ 124], 60.00th=[ 153], 00:26:12.523 | 70.00th=[ 220], 80.00th=[ 405], 90.00th=[ 768], 95.00th=[ 1116], 00:26:12.523 | 99.00th=[ 1334], 99.50th=[ 1351], 99.90th=[ 1787], 99.95th=[ 1787], 00:26:12.523 | 99.99th=[ 1787] 00:26:12.523 bw ( KiB/s): min= 9728, max=298496, per=6.73%, avg=63498.74, stdev=70981.22, samples=19 00:26:12.523 iops : min= 38, max= 1166, avg=247.95, stdev=277.33, samples=19 00:26:12.523 lat (msec) : 2=0.08%, 4=0.21%, 10=1.16%, 20=3.30%, 50=9.25% 00:26:12.523 lat (msec) : 100=25.61%, 250=34.20%, 500=8.51%, 750=6.57%, 1000=5.41% 00:26:12.523 lat (msec) : 2000=5.70% 00:26:12.523 cpu : usr=0.09%, sys=0.64%, ctx=625, majf=0, minf=4097 00:26:12.523 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:12.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:12.523 issued rwts: total=2421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:12.523 latency : target=0, window=0, percentile=100.00%, 
depth=64 00:26:12.523 job6: (groupid=0, jobs=1): err= 0: pid=301315: Mon Nov 18 07:11:31 2024 00:26:12.523 read: IOPS=532, BW=133MiB/s (140MB/s)(1351MiB/10138msec) 00:26:12.523 slat (usec): min=8, max=114327, avg=1305.12, stdev=5959.18 00:26:12.523 clat (usec): min=1302, max=474678, avg=118701.48, stdev=74257.34 00:26:12.523 lat (usec): min=1351, max=474708, avg=120006.59, stdev=75179.71 00:26:12.523 clat percentiles (msec): 00:26:12.523 | 1.00th=[ 7], 5.00th=[ 15], 10.00th=[ 29], 20.00th=[ 55], 00:26:12.523 | 30.00th=[ 78], 40.00th=[ 88], 50.00th=[ 104], 60.00th=[ 126], 00:26:12.523 | 70.00th=[ 153], 80.00th=[ 180], 90.00th=[ 215], 95.00th=[ 245], 00:26:12.523 | 99.00th=[ 376], 99.50th=[ 401], 99.90th=[ 443], 99.95th=[ 443], 00:26:12.523 | 99.99th=[ 477] 00:26:12.523 bw ( KiB/s): min=70656, max=251392, per=14.48%, avg=136626.50, stdev=52425.87, samples=20 00:26:12.523 iops : min= 276, max= 982, avg=533.65, stdev=204.74, samples=20 00:26:12.523 lat (msec) : 2=0.17%, 4=0.20%, 10=0.94%, 20=6.33%, 50=10.01% 00:26:12.523 lat (msec) : 100=30.48%, 250=47.20%, 500=4.66% 00:26:12.523 cpu : usr=0.28%, sys=1.25%, ctx=1368, majf=0, minf=4098 00:26:12.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:12.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:12.523 issued rwts: total=5403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:12.523 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:12.523 job7: (groupid=0, jobs=1): err= 0: pid=301316: Mon Nov 18 07:11:31 2024 00:26:12.523 read: IOPS=101, BW=25.4MiB/s (26.7MB/s)(258MiB/10136msec) 00:26:12.523 slat (usec): min=12, max=347748, avg=8831.62, stdev=32027.42 00:26:12.523 clat (msec): min=29, max=1166, avg=619.94, stdev=291.36 00:26:12.523 lat (msec): min=29, max=1166, avg=628.77, stdev=296.31 00:26:12.523 clat percentiles (msec): 00:26:12.523 | 1.00th=[ 31], 5.00th=[ 176], 10.00th=[ 228], 20.00th=[ 292], 00:26:12.523 | 30.00th=[ 397], 40.00th=[ 542], 50.00th=[ 709], 60.00th=[ 776], 00:26:12.523 | 70.00th=[ 818], 80.00th=[ 885], 90.00th=[ 969], 95.00th=[ 1045], 00:26:12.523 | 99.00th=[ 1099], 99.50th=[ 1099], 99.90th=[ 1099], 99.95th=[ 1167], 00:26:12.523 | 99.99th=[ 1167] 00:26:12.523 bw ( KiB/s): min=13312, max=50176, per=2.62%, avg=24748.05, stdev=9757.51, samples=20 00:26:12.523 iops : min= 52, max= 196, avg=96.55, stdev=38.14, samples=20 00:26:12.523 lat (msec) : 50=2.33%, 100=0.87%, 250=9.12%, 500=25.61%, 750=17.46% 00:26:12.523 lat (msec) : 1000=37.25%, 2000=7.37% 00:26:12.523 cpu : usr=0.08%, sys=0.43%, ctx=166, majf=0, minf=4097 00:26:12.523 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9% 00:26:12.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.523 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:12.523 issued rwts: total=1031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:12.523 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:12.523 job8: (groupid=0, jobs=1): err= 0: pid=301317: Mon Nov 18 07:11:31 2024 00:26:12.523 read: IOPS=216, BW=54.2MiB/s (56.8MB/s)(549MiB/10136msec) 00:26:12.523 slat (usec): min=8, max=575092, avg=3492.81, stdev=26706.79 00:26:12.523 clat (msec): min=21, max=1244, avg=291.68, stdev=324.29 00:26:12.523 lat (msec): min=21, max=1244, avg=295.17, stdev=328.89 00:26:12.523 clat percentiles (msec): 00:26:12.523 | 1.00th=[ 29], 5.00th=[ 37], 10.00th=[ 39], 20.00th=[ 
42], 00:26:12.523 | 30.00th=[ 43], 40.00th=[ 53], 50.00th=[ 72], 60.00th=[ 209], 00:26:12.523 | 70.00th=[ 481], 80.00th=[ 701], 90.00th=[ 827], 95.00th=[ 919], 00:26:12.523 | 99.00th=[ 986], 99.50th=[ 1003], 99.90th=[ 1083], 99.95th=[ 1250], 00:26:12.523 | 99.99th=[ 1250] 00:26:12.523 bw ( KiB/s): min= 6144, max=300032, per=5.79%, avg=54625.35, stdev=81951.34, samples=20 00:26:12.523 iops : min= 24, max= 1172, avg=213.25, stdev=320.15, samples=20 00:26:12.523 lat (msec) : 50=38.39%, 100=17.71%, 250=8.15%, 500=7.33%, 750=12.98% 00:26:12.523 lat (msec) : 1000=14.89%, 2000=0.55% 00:26:12.523 cpu : usr=0.03%, sys=0.61%, ctx=277, majf=0, minf=4097 00:26:12.523 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:26:12.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:12.523 issued rwts: total=2196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:12.523 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:12.523 job9: (groupid=0, jobs=1): err= 0: pid=301318: Mon Nov 18 07:11:31 2024 00:26:12.523 read: IOPS=289, BW=72.4MiB/s (75.9MB/s)(734MiB/10141msec) 00:26:12.523 slat (usec): min=8, max=650295, avg=1643.78, stdev=19268.43 00:26:12.523 clat (usec): min=603, max=1393.4k, avg=219173.61, stdev=304940.12 00:26:12.523 lat (usec): min=627, max=1462.5k, avg=220817.40, stdev=306954.51 00:26:12.523 clat percentiles (usec): 00:26:12.523 | 1.00th=[ 1729], 5.00th=[ 2868], 10.00th=[ 5145], 00:26:12.523 | 20.00th=[ 14222], 30.00th=[ 47973], 40.00th=[ 88605], 00:26:12.523 | 50.00th=[ 106431], 60.00th=[ 156238], 70.00th=[ 214959], 00:26:12.523 | 80.00th=[ 278922], 90.00th=[ 700449], 95.00th=[1019216], 00:26:12.523 | 99.00th=[1300235], 99.50th=[1333789], 99.90th=[1350566], 00:26:12.523 | 99.95th=[1367344], 99.99th=[1400898] 00:26:12.523 bw ( KiB/s): min= 3072, max=196096, per=7.79%, avg=73523.90, stdev=55755.50, samples=20 00:26:12.523 iops : min= 12, max= 766, avg=287.15, stdev=217.80, samples=20 00:26:12.523 lat (usec) : 750=0.51%, 1000=0.41% 00:26:12.523 lat (msec) : 2=0.54%, 4=6.23%, 10=6.54%, 20=9.16%, 50=6.81% 00:26:12.523 lat (msec) : 100=16.96%, 250=29.59%, 500=12.02%, 750=1.40%, 1000=4.63% 00:26:12.523 lat (msec) : 2000=5.21% 00:26:12.523 cpu : usr=0.17%, sys=0.82%, ctx=990, majf=0, minf=4097 00:26:12.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:12.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:12.523 issued rwts: total=2937,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:12.523 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:12.523 job10: (groupid=0, jobs=1): err= 0: pid=301319: Mon Nov 18 07:11:31 2024 00:26:12.523 read: IOPS=667, BW=167MiB/s (175MB/s)(1692MiB/10142msec) 00:26:12.523 slat (usec): min=12, max=154775, avg=1475.25, stdev=6220.91 00:26:12.523 clat (msec): min=19, max=528, avg=94.38, stdev=90.50 00:26:12.523 lat (msec): min=24, max=528, avg=95.85, stdev=91.86 00:26:12.523 clat percentiles (msec): 00:26:12.523 | 1.00th=[ 28], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 34], 00:26:12.523 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 41], 60.00th=[ 69], 00:26:12.523 | 70.00th=[ 116], 80.00th=[ 155], 90.00th=[ 218], 95.00th=[ 313], 00:26:12.523 | 99.00th=[ 393], 99.50th=[ 426], 99.90th=[ 510], 99.95th=[ 510], 00:26:12.523 | 99.99th=[ 527] 00:26:12.523 bw ( KiB/s): min=32191, max=458347, 
per=18.18%, avg=171475.50, stdev=143835.33, samples=20 00:26:12.523 iops : min= 125, max= 1790, avg=669.70, stdev=561.81, samples=20 00:26:12.523 lat (msec) : 20=0.01%, 50=54.51%, 100=12.74%, 250=25.81%, 500=6.72% 00:26:12.523 lat (msec) : 750=0.21% 00:26:12.523 cpu : usr=0.42%, sys=1.88%, ctx=961, majf=0, minf=3721 00:26:12.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:12.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:12.523 issued rwts: total=6766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:12.523 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:12.523 00:26:12.523 Run status group 0 (all jobs): 00:26:12.523 READ: bw=921MiB/s (966MB/s), 20.1MiB/s-210MiB/s (21.1MB/s-220MB/s), io=9343MiB (9797MB), run=10045-10142msec 00:26:12.524 00:26:12.524 Disk stats (read/write): 00:26:12.524 nvme0n1: ios=1500/0, merge=0/0, ticks=1202039/0, in_queue=1202039, util=97.10% 00:26:12.524 nvme10n1: ios=9532/0, merge=0/0, ticks=1227036/0, in_queue=1227036, util=97.33% 00:26:12.524 nvme1n1: ios=1725/0, merge=0/0, ticks=1211040/0, in_queue=1211040, util=97.62% 00:26:12.524 nvme2n1: ios=3046/0, merge=0/0, ticks=1222046/0, in_queue=1222046, util=97.77% 00:26:12.524 nvme3n1: ios=16592/0, merge=0/0, ticks=1232768/0, in_queue=1232768, util=97.85% 00:26:12.524 nvme4n1: ios=4688/0, merge=0/0, ticks=1213464/0, in_queue=1213464, util=98.20% 00:26:12.524 nvme5n1: ios=10629/0, merge=0/0, ticks=1231929/0, in_queue=1231929, util=98.37% 00:26:12.524 nvme6n1: ios=1902/0, merge=0/0, ticks=1218340/0, in_queue=1218340, util=98.49% 00:26:12.524 nvme7n1: ios=4264/0, merge=0/0, ticks=1221607/0, in_queue=1221607, util=98.90% 00:26:12.524 nvme8n1: ios=5713/0, merge=0/0, ticks=1232174/0, in_queue=1232174, util=99.11% 00:26:12.524 nvme9n1: ios=13375/0, merge=0/0, ticks=1228838/0, in_queue=1228838, util=99.25% 00:26:12.524 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:12.524 [global] 00:26:12.524 thread=1 00:26:12.524 invalidate=1 00:26:12.524 rw=randwrite 00:26:12.524 time_based=1 00:26:12.524 runtime=10 00:26:12.524 ioengine=libaio 00:26:12.524 direct=1 00:26:12.524 bs=262144 00:26:12.524 iodepth=64 00:26:12.524 norandommap=1 00:26:12.524 numjobs=1 00:26:12.524 00:26:12.524 [job0] 00:26:12.524 filename=/dev/nvme0n1 00:26:12.524 [job1] 00:26:12.524 filename=/dev/nvme10n1 00:26:12.524 [job2] 00:26:12.524 filename=/dev/nvme1n1 00:26:12.524 [job3] 00:26:12.524 filename=/dev/nvme2n1 00:26:12.524 [job4] 00:26:12.524 filename=/dev/nvme3n1 00:26:12.524 [job5] 00:26:12.524 filename=/dev/nvme4n1 00:26:12.524 [job6] 00:26:12.524 filename=/dev/nvme5n1 00:26:12.524 [job7] 00:26:12.524 filename=/dev/nvme6n1 00:26:12.524 [job8] 00:26:12.524 filename=/dev/nvme7n1 00:26:12.524 [job9] 00:26:12.524 filename=/dev/nvme8n1 00:26:12.524 [job10] 00:26:12.524 filename=/dev/nvme9n1 00:26:12.524 Could not set queue depth (nvme0n1) 00:26:12.524 Could not set queue depth (nvme10n1) 00:26:12.524 Could not set queue depth (nvme1n1) 00:26:12.524 Could not set queue depth (nvme2n1) 00:26:12.524 Could not set queue depth (nvme3n1) 00:26:12.524 Could not set queue depth (nvme4n1) 00:26:12.524 Could not set queue depth (nvme5n1) 00:26:12.524 Could not set queue depth (nvme6n1) 00:26:12.524 Could not set queue depth (nvme7n1) 
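Both fio passes in this trace -- the read pass whose results end just above and the randwrite pass whose job file is being printed here -- come from the same fio-wrapper invocation pattern (-p nvmf -i 262144 -d 64 -r 10), differing only in -t read versus -t randwrite. A rough standalone equivalent that regenerates the printed [global]/[jobN] job file and hands it to fio directly; this is an approximation, since the real wrapper also handles device discovery and option plumbing, and the job-file path below is hypothetical:

run_fio() {
    local rw="$1"                              # "read" for the first pass, "randwrite" for the second
    local job=/tmp/nvmf_multiconnection.fio    # hypothetical path, not from the trace
    local n=0 dev

    {
        printf '[global]\nthread=1\ninvalidate=1\nrw=%s\ntime_based=1\n' "$rw"
        printf 'runtime=10\nioengine=libaio\ndirect=1\nbs=262144\niodepth=64\n'
        printf 'norandommap=1\nnumjobs=1\n'
        # One job per connected namespace, in the order listed in the trace
        for dev in /dev/nvme{0,10,1,2,3,4,5,6,7,8,9}n1; do
            printf '[job%d]\nfilename=%s\n' "$n" "$dev"
            n=$((n + 1))
        done
    } > "$job"

    fio "$job"
}

run_fio read        # corresponds to fio-wrapper ... -t read -r 10
run_fio randwrite   # corresponds to fio-wrapper ... -t randwrite -r 10

The "Run status group 0" line printed after each pass aggregates the per-job bandwidth; for the read pass above that is 921MiB/s over 9343MiB of I/O, consistent with roughly 10 seconds of runtime per job.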
00:26:12.524 Could not set queue depth (nvme8n1) 00:26:12.524 Could not set queue depth (nvme9n1) 00:26:12.524 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.524 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.524 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.524 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.524 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.524 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.524 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.524 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.524 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.524 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.524 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.524 fio-3.35 00:26:12.524 Starting 11 threads 00:26:22.512 00:26:22.512 job0: (groupid=0, jobs=1): err= 0: pid=302037: Mon Nov 18 07:11:42 2024 00:26:22.512 write: IOPS=360, BW=90.2MiB/s (94.6MB/s)(921MiB/10208msec); 0 zone resets 00:26:22.512 slat (usec): min=16, max=83334, avg=1724.51, stdev=7176.82 00:26:22.512 clat (usec): min=781, max=1066.9k, avg=175433.07, stdev=218625.14 00:26:22.512 lat (usec): min=827, max=1067.0k, avg=177157.59, stdev=220921.37 00:26:22.512 clat percentiles (msec): 00:26:22.512 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 8], 20.00th=[ 22], 00:26:22.512 | 30.00th=[ 44], 40.00th=[ 47], 50.00th=[ 65], 60.00th=[ 115], 00:26:22.512 | 70.00th=[ 138], 80.00th=[ 414], 90.00th=[ 567], 95.00th=[ 642], 00:26:22.512 | 99.00th=[ 793], 99.50th=[ 844], 99.90th=[ 1028], 99.95th=[ 1028], 00:26:22.512 | 99.99th=[ 1070] 00:26:22.512 bw ( KiB/s): min=16384, max=357376, per=11.14%, avg=92684.20, stdev=75270.92, samples=20 00:26:22.512 iops : min= 64, max= 1396, avg=362.00, stdev=294.01, samples=20 00:26:22.512 lat (usec) : 1000=0.05% 00:26:22.512 lat (msec) : 2=0.79%, 4=3.42%, 10=7.74%, 20=6.92%, 50=24.89% 00:26:22.512 lat (msec) : 100=14.14%, 250=18.08%, 500=10.83%, 750=11.45%, 1000=1.55% 00:26:22.512 lat (msec) : 2000=0.14% 00:26:22.512 cpu : usr=1.10%, sys=1.35%, ctx=2210, majf=0, minf=1 00:26:22.512 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:22.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:22.512 issued rwts: total=0,3684,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.512 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:22.512 job1: (groupid=0, jobs=1): err= 0: pid=302050: Mon Nov 18 07:11:42 2024 00:26:22.512 write: IOPS=172, BW=43.1MiB/s (45.2MB/s)(437MiB/10140msec); 0 zone resets 00:26:22.512 slat (usec): min=24, max=85922, avg=4733.22, stdev=11716.27 00:26:22.512 clat (msec): min=10, max=968, avg=366.17, stdev=213.86 
00:26:22.512 lat (msec): min=10, max=968, avg=370.90, stdev=217.17 00:26:22.512 clat percentiles (msec): 00:26:22.512 | 1.00th=[ 17], 5.00th=[ 33], 10.00th=[ 59], 20.00th=[ 121], 00:26:22.512 | 30.00th=[ 236], 40.00th=[ 300], 50.00th=[ 372], 60.00th=[ 430], 00:26:22.512 | 70.00th=[ 493], 80.00th=[ 584], 90.00th=[ 651], 95.00th=[ 701], 00:26:22.512 | 99.00th=[ 793], 99.50th=[ 835], 99.90th=[ 969], 99.95th=[ 969], 00:26:22.512 | 99.99th=[ 969] 00:26:22.512 bw ( KiB/s): min=22528, max=122368, per=5.18%, avg=43114.50, stdev=24466.25, samples=20 00:26:22.512 iops : min= 88, max= 478, avg=168.40, stdev=95.57, samples=20 00:26:22.512 lat (msec) : 20=1.32%, 50=7.49%, 100=8.07%, 250=15.96%, 500=38.27% 00:26:22.512 lat (msec) : 750=26.95%, 1000=1.95% 00:26:22.512 cpu : usr=0.46%, sys=0.68%, ctx=848, majf=0, minf=1 00:26:22.512 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:26:22.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.512 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:22.512 issued rwts: total=0,1748,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.512 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:22.512 job2: (groupid=0, jobs=1): err= 0: pid=302054: Mon Nov 18 07:11:42 2024 00:26:22.512 write: IOPS=207, BW=51.9MiB/s (54.5MB/s)(526MiB/10122msec); 0 zone resets 00:26:22.512 slat (usec): min=20, max=244663, avg=3893.78, stdev=11976.24 00:26:22.512 clat (usec): min=950, max=973081, avg=303982.74, stdev=246580.20 00:26:22.512 lat (usec): min=1029, max=973165, avg=307876.52, stdev=250028.76 00:26:22.512 clat percentiles (msec): 00:26:22.512 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 27], 20.00th=[ 48], 00:26:22.512 | 30.00th=[ 73], 40.00th=[ 100], 50.00th=[ 334], 60.00th=[ 418], 00:26:22.512 | 70.00th=[ 472], 80.00th=[ 558], 90.00th=[ 625], 95.00th=[ 718], 00:26:22.512 | 99.00th=[ 827], 99.50th=[ 869], 99.90th=[ 936], 99.95th=[ 969], 00:26:22.512 | 99.99th=[ 978] 00:26:22.512 bw ( KiB/s): min=20480, max=254976, per=6.28%, avg=52223.00, stdev=56495.38, samples=20 00:26:22.512 iops : min= 80, max= 996, avg=203.90, stdev=220.73, samples=20 00:26:22.512 lat (usec) : 1000=0.10% 00:26:22.512 lat (msec) : 2=0.24%, 4=0.76%, 10=3.52%, 20=3.66%, 50=14.41% 00:26:22.512 lat (msec) : 100=17.69%, 250=5.80%, 500=27.25%, 750=24.11%, 1000=2.47% 00:26:22.512 cpu : usr=0.62%, sys=0.76%, ctx=1044, majf=0, minf=1 00:26:22.512 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:26:22.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:22.512 issued rwts: total=0,2103,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.512 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:22.512 job3: (groupid=0, jobs=1): err= 0: pid=302055: Mon Nov 18 07:11:42 2024 00:26:22.512 write: IOPS=149, BW=37.3MiB/s (39.1MB/s)(381MiB/10206msec); 0 zone resets 00:26:22.512 slat (usec): min=26, max=67143, avg=5768.54, stdev=13004.32 00:26:22.512 clat (msec): min=3, max=1076, avg=422.98, stdev=205.78 00:26:22.512 lat (msec): min=5, max=1076, avg=428.75, stdev=208.76 00:26:22.512 clat percentiles (msec): 00:26:22.512 | 1.00th=[ 9], 5.00th=[ 19], 10.00th=[ 53], 20.00th=[ 288], 00:26:22.512 | 30.00th=[ 355], 40.00th=[ 397], 50.00th=[ 430], 60.00th=[ 477], 00:26:22.512 | 70.00th=[ 542], 80.00th=[ 575], 90.00th=[ 676], 95.00th=[ 735], 00:26:22.512 | 99.00th=[ 902], 99.50th=[ 995], 99.90th=[ 1083], 
99.95th=[ 1083], 00:26:22.512 | 99.99th=[ 1083] 00:26:22.512 bw ( KiB/s): min=16384, max=100864, per=4.49%, avg=37329.15, stdev=18344.19, samples=20 00:26:22.512 iops : min= 64, max= 394, avg=145.80, stdev=71.65, samples=20 00:26:22.512 lat (msec) : 4=0.07%, 10=1.77%, 20=3.88%, 50=4.14%, 100=1.25% 00:26:22.512 lat (msec) : 250=6.77%, 500=46.25%, 750=31.47%, 1000=4.01%, 2000=0.39% 00:26:22.512 cpu : usr=0.48%, sys=0.51%, ctx=636, majf=0, minf=1 00:26:22.512 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.9% 00:26:22.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.512 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:22.512 issued rwts: total=0,1522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.512 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:22.512 job4: (groupid=0, jobs=1): err= 0: pid=302056: Mon Nov 18 07:11:42 2024 00:26:22.512 write: IOPS=259, BW=65.0MiB/s (68.1MB/s)(663MiB/10207msec); 0 zone resets 00:26:22.512 slat (usec): min=21, max=94734, avg=3214.64, stdev=9795.97 00:26:22.512 clat (msec): min=2, max=1066, avg=242.97, stdev=228.60 00:26:22.512 lat (msec): min=2, max=1066, avg=246.19, stdev=231.61 00:26:22.512 clat percentiles (msec): 00:26:22.512 | 1.00th=[ 15], 5.00th=[ 38], 10.00th=[ 47], 20.00th=[ 54], 00:26:22.512 | 30.00th=[ 80], 40.00th=[ 107], 50.00th=[ 136], 60.00th=[ 182], 00:26:22.512 | 70.00th=[ 300], 80.00th=[ 498], 90.00th=[ 600], 95.00th=[ 701], 00:26:22.512 | 99.00th=[ 835], 99.50th=[ 944], 99.90th=[ 1028], 99.95th=[ 1070], 00:26:22.512 | 99.99th=[ 1070] 00:26:22.512 bw ( KiB/s): min=16384, max=248320, per=7.96%, avg=66252.80, stdev=68130.01, samples=20 00:26:22.512 iops : min= 64, max= 970, avg=258.80, stdev=266.13, samples=20 00:26:22.512 lat (msec) : 4=0.11%, 10=0.15%, 20=1.62%, 50=10.75%, 100=24.32% 00:26:22.512 lat (msec) : 250=31.18%, 500=12.07%, 750=17.27%, 1000=2.30%, 2000=0.23% 00:26:22.512 cpu : usr=0.73%, sys=0.85%, ctx=986, majf=0, minf=1 00:26:22.512 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:22.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:22.512 issued rwts: total=0,2652,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.512 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:22.512 job5: (groupid=0, jobs=1): err= 0: pid=302057: Mon Nov 18 07:11:42 2024 00:26:22.512 write: IOPS=260, BW=65.0MiB/s (68.2MB/s)(658MiB/10115msec); 0 zone resets 00:26:22.512 slat (usec): min=16, max=184381, avg=2849.09, stdev=10763.43 00:26:22.512 clat (msec): min=2, max=1029, avg=243.15, stdev=251.95 00:26:22.512 lat (msec): min=2, max=1029, avg=246.00, stdev=255.19 00:26:22.512 clat percentiles (msec): 00:26:22.512 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 13], 20.00th=[ 17], 00:26:22.512 | 30.00th=[ 23], 40.00th=[ 60], 50.00th=[ 123], 60.00th=[ 266], 00:26:22.512 | 70.00th=[ 401], 80.00th=[ 451], 90.00th=[ 684], 95.00th=[ 735], 00:26:22.512 | 99.00th=[ 877], 99.50th=[ 911], 99.90th=[ 995], 99.95th=[ 1028], 00:26:22.512 | 99.99th=[ 1028] 00:26:22.512 bw ( KiB/s): min=18395, max=201836, per=7.90%, avg=65708.10, stdev=49266.84, samples=20 00:26:22.512 iops : min= 71, max= 788, avg=256.60, stdev=192.42, samples=20 00:26:22.512 lat (msec) : 4=0.04%, 10=3.19%, 20=24.94%, 50=10.95%, 100=7.45% 00:26:22.512 lat (msec) : 250=11.44%, 500=26.08%, 750=11.86%, 1000=3.95%, 2000=0.08% 00:26:22.512 cpu : usr=0.83%, sys=0.80%, 
ctx=1774, majf=0, minf=1 00:26:22.512 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:22.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:22.512 issued rwts: total=0,2630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.512 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:22.512 job6: (groupid=0, jobs=1): err= 0: pid=302058: Mon Nov 18 07:11:42 2024 00:26:22.512 write: IOPS=194, BW=48.7MiB/s (51.1MB/s)(493MiB/10121msec); 0 zone resets 00:26:22.512 slat (usec): min=24, max=166418, avg=4660.56, stdev=12092.07 00:26:22.512 clat (usec): min=1558, max=983965, avg=323473.26, stdev=225335.24 00:26:22.512 lat (msec): min=2, max=984, avg=328.13, stdev=228.74 00:26:22.512 clat percentiles (msec): 00:26:22.513 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 11], 20.00th=[ 62], 00:26:22.513 | 30.00th=[ 148], 40.00th=[ 262], 50.00th=[ 334], 60.00th=[ 409], 00:26:22.513 | 70.00th=[ 447], 80.00th=[ 558], 90.00th=[ 609], 95.00th=[ 701], 00:26:22.513 | 99.00th=[ 785], 99.50th=[ 818], 99.90th=[ 869], 99.95th=[ 986], 00:26:22.513 | 99.99th=[ 986] 00:26:22.513 bw ( KiB/s): min=20480, max=219648, per=5.88%, avg=48881.95, stdev=45441.83, samples=20 00:26:22.513 iops : min= 80, max= 858, avg=190.90, stdev=177.52, samples=20 00:26:22.513 lat (msec) : 2=0.05%, 4=1.27%, 10=8.11%, 20=7.91%, 50=1.52% 00:26:22.513 lat (msec) : 100=4.31%, 250=14.70%, 500=37.25%, 750=23.11%, 1000=1.77% 00:26:22.513 cpu : usr=0.54%, sys=0.73%, ctx=952, majf=0, minf=1 00:26:22.513 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:26:22.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.513 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:22.513 issued rwts: total=0,1973,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.513 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:22.513 job7: (groupid=0, jobs=1): err= 0: pid=302059: Mon Nov 18 07:11:42 2024 00:26:22.513 write: IOPS=383, BW=95.8MiB/s (100MB/s)(970MiB/10127msec); 0 zone resets 00:26:22.513 slat (usec): min=21, max=160136, avg=1867.00, stdev=7357.68 00:26:22.513 clat (usec): min=839, max=1053.1k, avg=165075.47, stdev=177293.19 00:26:22.513 lat (usec): min=877, max=1053.2k, avg=166942.48, stdev=179225.67 00:26:22.513 clat percentiles (msec): 00:26:22.513 | 1.00th=[ 5], 5.00th=[ 24], 10.00th=[ 34], 20.00th=[ 55], 00:26:22.513 | 30.00th=[ 84], 40.00th=[ 91], 50.00th=[ 105], 60.00th=[ 124], 00:26:22.513 | 70.00th=[ 163], 80.00th=[ 207], 90.00th=[ 372], 95.00th=[ 625], 00:26:22.513 | 99.00th=[ 885], 99.50th=[ 936], 99.90th=[ 1011], 99.95th=[ 1053], 00:26:22.513 | 99.99th=[ 1053] 00:26:22.513 bw ( KiB/s): min=18432, max=188416, per=11.74%, avg=97701.90, stdev=47014.09, samples=20 00:26:22.513 iops : min= 72, max= 736, avg=381.60, stdev=183.62, samples=20 00:26:22.513 lat (usec) : 1000=0.13% 00:26:22.513 lat (msec) : 2=0.23%, 4=0.54%, 10=0.34%, 20=1.31%, 50=16.09% 00:26:22.513 lat (msec) : 100=27.71%, 250=37.25%, 500=9.07%, 750=4.92%, 1000=2.22% 00:26:22.513 lat (msec) : 2000=0.18% 00:26:22.513 cpu : usr=1.27%, sys=1.23%, ctx=1962, majf=0, minf=1 00:26:22.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:22.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:22.513 issued rwts: total=0,3879,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:26:22.513 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:22.513 job8: (groupid=0, jobs=1): err= 0: pid=302060: Mon Nov 18 07:11:42 2024 00:26:22.513 write: IOPS=397, BW=99.4MiB/s (104MB/s)(1006MiB/10128msec); 0 zone resets 00:26:22.513 slat (usec): min=15, max=111319, avg=1539.21, stdev=6693.86 00:26:22.513 clat (usec): min=707, max=1009.6k, avg=159365.03, stdev=206653.57 00:26:22.513 lat (usec): min=739, max=1009.6k, avg=160904.24, stdev=208777.43 00:26:22.513 clat percentiles (msec): 00:26:22.513 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 22], 20.00th=[ 29], 00:26:22.513 | 30.00th=[ 45], 40.00th=[ 54], 50.00th=[ 66], 60.00th=[ 100], 00:26:22.513 | 70.00th=[ 120], 80.00th=[ 211], 90.00th=[ 558], 95.00th=[ 651], 00:26:22.513 | 99.00th=[ 818], 99.50th=[ 860], 99.90th=[ 969], 99.95th=[ 986], 00:26:22.513 | 99.99th=[ 1011] 00:26:22.513 bw ( KiB/s): min=25088, max=357376, per=12.19%, avg=101423.15, stdev=99430.86, samples=20 00:26:22.513 iops : min= 98, max= 1396, avg=396.10, stdev=388.44, samples=20 00:26:22.513 lat (usec) : 750=0.05%, 1000=0.07% 00:26:22.513 lat (msec) : 2=0.70%, 4=1.84%, 10=2.81%, 20=2.16%, 50=26.41% 00:26:22.513 lat (msec) : 100=26.36%, 250=21.84%, 500=4.17%, 750=12.15%, 1000=1.42% 00:26:22.513 lat (msec) : 2000=0.02% 00:26:22.513 cpu : usr=1.14%, sys=1.46%, ctx=2602, majf=0, minf=1 00:26:22.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:22.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:22.513 issued rwts: total=0,4025,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.513 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:22.513 job9: (groupid=0, jobs=1): err= 0: pid=302061: Mon Nov 18 07:11:42 2024 00:26:22.513 write: IOPS=654, BW=164MiB/s (171MB/s)(1656MiB/10124msec); 0 zone resets 00:26:22.513 slat (usec): min=19, max=146409, avg=1247.92, stdev=3984.24 00:26:22.513 clat (usec): min=1127, max=563245, avg=96253.38, stdev=89539.53 00:26:22.513 lat (usec): min=1669, max=563332, avg=97501.31, stdev=90652.22 00:26:22.513 clat percentiles (msec): 00:26:22.513 | 1.00th=[ 6], 5.00th=[ 21], 10.00th=[ 41], 20.00th=[ 45], 00:26:22.513 | 30.00th=[ 49], 40.00th=[ 52], 50.00th=[ 70], 60.00th=[ 84], 00:26:22.513 | 70.00th=[ 99], 80.00th=[ 118], 90.00th=[ 184], 95.00th=[ 284], 00:26:22.513 | 99.00th=[ 498], 99.50th=[ 527], 99.90th=[ 558], 99.95th=[ 558], 00:26:22.513 | 99.99th=[ 567] 00:26:22.513 bw ( KiB/s): min=32768, max=361472, per=20.19%, avg=167940.35, stdev=102432.97, samples=20 00:26:22.513 iops : min= 128, max= 1412, avg=656.00, stdev=400.15, samples=20 00:26:22.513 lat (msec) : 2=0.05%, 4=0.20%, 10=2.02%, 20=2.58%, 50=29.78% 00:26:22.513 lat (msec) : 100=36.71%, 250=22.42%, 500=5.39%, 750=0.86% 00:26:22.513 cpu : usr=1.93%, sys=2.16%, ctx=2616, majf=0, minf=1 00:26:22.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:26:22.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:22.513 issued rwts: total=0,6623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.513 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:22.513 job10: (groupid=0, jobs=1): err= 0: pid=302062: Mon Nov 18 07:11:42 2024 00:26:22.513 write: IOPS=228, BW=57.1MiB/s (59.9MB/s)(583MiB/10207msec); 0 zone resets 00:26:22.513 slat (usec): min=22, max=247888, 
avg=3854.54, stdev=12780.22 00:26:22.513 clat (usec): min=838, max=1237.5k, avg=276146.77, stdev=257954.68 00:26:22.513 lat (usec): min=903, max=1237.6k, avg=280001.31, stdev=261386.21 00:26:22.513 clat percentiles (msec): 00:26:22.513 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 11], 20.00th=[ 104], 00:26:22.513 | 30.00th=[ 120], 40.00th=[ 165], 50.00th=[ 182], 60.00th=[ 205], 00:26:22.513 | 70.00th=[ 292], 80.00th=[ 506], 90.00th=[ 709], 95.00th=[ 768], 00:26:22.513 | 99.00th=[ 1053], 99.50th=[ 1099], 99.90th=[ 1183], 99.95th=[ 1234], 00:26:22.513 | 99.99th=[ 1234] 00:26:22.513 bw ( KiB/s): min=14336, max=146944, per=6.98%, avg=58075.30, stdev=42432.03, samples=20 00:26:22.513 iops : min= 56, max= 574, avg=226.85, stdev=165.74, samples=20 00:26:22.513 lat (usec) : 1000=0.13% 00:26:22.513 lat (msec) : 2=0.73%, 4=1.93%, 10=6.05%, 20=3.52%, 50=4.37% 00:26:22.513 lat (msec) : 100=2.02%, 250=46.01%, 500=15.18%, 750=13.85%, 1000=4.55% 00:26:22.513 lat (msec) : 2000=1.67% 00:26:22.513 cpu : usr=0.63%, sys=0.83%, ctx=1045, majf=0, minf=1 00:26:22.513 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:26:22.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:22.513 issued rwts: total=0,2332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.513 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:22.513 00:26:22.513 Run status group 0 (all jobs): 00:26:22.513 WRITE: bw=812MiB/s (852MB/s), 37.3MiB/s-164MiB/s (39.1MB/s-171MB/s), io=8293MiB (8696MB), run=10115-10208msec 00:26:22.513 00:26:22.513 Disk stats (read/write): 00:26:22.513 nvme0n1: ios=51/7339, merge=0/0, ticks=721/1245314, in_queue=1246035, util=99.95% 00:26:22.513 nvme10n1: ios=48/3317, merge=0/0, ticks=1109/1200329, in_queue=1201438, util=100.00% 00:26:22.513 nvme1n1: ios=19/4039, merge=0/0, ticks=275/1177146, in_queue=1177421, util=98.24% 00:26:22.513 nvme2n1: ios=49/3017, merge=0/0, ticks=719/1237832, in_queue=1238551, util=100.00% 00:26:22.513 nvme3n1: ios=44/5276, merge=0/0, ticks=1530/1232045, in_queue=1233575, util=100.00% 00:26:22.513 nvme4n1: ios=13/5070, merge=0/0, ticks=546/1183369, in_queue=1183915, util=98.61% 00:26:22.513 nvme5n1: ios=42/3747, merge=0/0, ticks=2909/1201050, in_queue=1203959, util=100.00% 00:26:22.513 nvme6n1: ios=56/7592, merge=0/0, ticks=3825/1162014, in_queue=1165839, util=100.00% 00:26:22.513 nvme7n1: ios=25/7869, merge=0/0, ticks=389/1188439, in_queue=1188828, util=100.00% 00:26:22.513 nvme8n1: ios=36/13045, merge=0/0, ticks=759/1207271, in_queue=1208030, util=100.00% 00:26:22.513 nvme9n1: ios=0/4635, merge=0/0, ticks=0/1236162, in_queue=1236162, util=99.14% 00:26:22.513 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:22.513 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:22.513 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.513 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:22.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:22.513 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:22.513 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
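The trace that follows repeats one teardown pattern for each of the eleven subsystems, cnode1 through cnode11: disconnect the kernel initiator, wait until the serial SPDK<i> no longer shows up in lsblk, then delete the subsystem over the SPDK RPC socket. A minimal sketch of that per-subsystem loop, using the helper names exactly as they appear in the trace (waitforserial_disconnect, rpc_cmd) and therefore only an approximation of target/multiconnection.sh itself:

NVMF_SUBSYS=11
for i in $(seq 1 "$NVMF_SUBSYS"); do
    # detach the kernel NVMe-oF controller for this subsystem
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
    # poll lsblk until no block device reports serial SPDK$i
    waitforserial_disconnect "SPDK$i"
    # remove the subsystem from the running SPDK target over RPC
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
done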
common/autotest_common.sh@1223 -- # local i=0 00:26:22.513 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:22.513 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:22.513 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:22.513 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:22.513 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:22.513 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:22.513 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.513 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:22.513 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.514 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.514 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:22.514 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:22.514 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:22.514 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:22.514 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:22.514 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:22.514 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:22.514 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:22.514 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:22.514 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:22.514 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.514 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:22.514 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.514 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.514 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:22.772 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:22.772 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:22.772 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1223 -- # local i=0 00:26:22.772 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:22.772 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:22.772 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:22.772 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:22.772 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:22.772 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:22.772 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.773 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:22.773 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.773 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.773 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:23.030 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:23.030 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:23.030 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:23.030 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:23.030 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:23.030 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:23.030 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:23.030 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:23.030 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:23.030 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.030 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:23.030 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.030 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.030 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:23.287 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:23.287 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:23.287 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1223 -- # local i=0 00:26:23.287 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:23.287 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:23.287 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:23.287 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:23.287 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:23.287 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:23.287 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.287 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:23.287 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.287 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.287 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:23.545 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:23.545 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:23.545 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:23.545 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:23.545 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:23.545 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:23.545 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:23.545 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:23.545 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:23.545 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.545 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:23.545 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.545 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.545 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:23.804 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:23.804 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:23.804 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1223 -- # local i=0 00:26:23.804 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:23.804 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:23.804 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:23.804 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:23.804 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:23.804 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:23.804 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.804 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:23.804 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.804 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.804 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:23.804 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:23.804 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:23.804 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:23.804 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:23.804 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:23.804 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:23.804 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:24.065 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1223 -- # local i=0 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:24.065 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:24.065 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:24.065 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:24.065 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.065 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:24.065 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.065 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.065 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:24.326 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:24.326 07:11:45 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:24.326 rmmod nvme_tcp 00:26:24.326 rmmod nvme_fabrics 00:26:24.326 rmmod nvme_keyring 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 297052 ']' 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 297052 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 297052 ']' 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 297052 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 297052 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 297052' 00:26:24.326 killing process with pid 297052 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 297052 00:26:24.326 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 297052 00:26:24.894 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:24.894 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:24.894 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:24.894 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:24.894 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:24.894 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:24.894 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:24.894 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:24.894 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:24.894 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.894 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:24.894 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.798 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:26.798 00:26:26.798 real 1m0.794s 00:26:26.798 user 3m33.785s 00:26:26.798 sys 0m16.082s 00:26:26.798 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:26.798 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:26.798 ************************************ 00:26:26.798 END TEST nvmf_multiconnection 00:26:26.798 ************************************ 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:27.057 ************************************ 00:26:27.057 START TEST nvmf_initiator_timeout 00:26:27.057 ************************************ 00:26:27.057 07:11:47 
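The cleanup above undoes everything nvmftestinit set up for the multiconnection run: the target process is killed and reaped, every firewall rule tagged with an SPDK_NVMF comment is dropped by filtering iptables-save before restoring, the target-side network namespace is removed, and the initiator-side address is flushed. A condensed sketch, assuming the pid and interface names shown in the trace and assuming _remove_spdk_ns ultimately just deletes the namespace:

nvmfpid=297052                      # SPDK nvmf target pid recorded at start-up
kill "$nvmfpid" && wait "$nvmfpid"  # stop the reactor and reap it
# drop only the rules this test added; they carry an SPDK_NVMF comment
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk     # assumed effect of _remove_spdk_ns
ip -4 addr flush cvl_0_1            # clear the initiator-side address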
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:27.057 * Looking for test storage... 00:26:27.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:27.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.057 --rc genhtml_branch_coverage=1 00:26:27.057 --rc genhtml_function_coverage=1 00:26:27.057 --rc genhtml_legend=1 00:26:27.057 --rc geninfo_all_blocks=1 00:26:27.057 --rc geninfo_unexecuted_blocks=1 00:26:27.057 00:26:27.057 ' 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:27.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.057 --rc genhtml_branch_coverage=1 00:26:27.057 --rc genhtml_function_coverage=1 00:26:27.057 --rc genhtml_legend=1 00:26:27.057 --rc geninfo_all_blocks=1 00:26:27.057 --rc geninfo_unexecuted_blocks=1 00:26:27.057 00:26:27.057 ' 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:27.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.057 --rc genhtml_branch_coverage=1 00:26:27.057 --rc genhtml_function_coverage=1 00:26:27.057 --rc genhtml_legend=1 00:26:27.057 --rc geninfo_all_blocks=1 00:26:27.057 --rc geninfo_unexecuted_blocks=1 00:26:27.057 00:26:27.057 ' 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:27.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.057 --rc genhtml_branch_coverage=1 00:26:27.057 --rc genhtml_function_coverage=1 00:26:27.057 --rc genhtml_legend=1 00:26:27.057 --rc geninfo_all_blocks=1 00:26:27.057 --rc geninfo_unexecuted_blocks=1 00:26:27.057 00:26:27.057 ' 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
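The cmp_versions trace above is a component-wise numeric comparison of dotted version strings; here it confirms that the installed lcov (1.15) is older than 2, so the old-style --rc lcov_* coverage flags are selected. A standalone sketch of the same idea, not the implementation in scripts/common.sh:

version_lt() {                        # succeeds if $1 sorts before $2
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0}; y=${b[i]:-0}
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1                          # equal versions are not "less than"
}
version_lt 1.15 2 && echo "old lcov: use --rc lcov_branch_coverage=1 style options"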
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:27.057 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:27.058 07:11:47 
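The very long PATH values above are what paths/export.sh produces when it unconditionally prepends the pinned toolchains; the duplicates suggest the file has already been sourced several times earlier in the run, which is harmless since lookup stops at the first match. The prepend step itself reduces to (versions as shown in the trace):

export PATH=/opt/golangci/1.54.2/bin:$PATH   # pinned golangci-lint first
export PATH=/opt/go/1.21.1/bin:$PATH         # then the pinned Go toolchain
export PATH=/opt/protoc/21.7/bin:$PATH       # protoc ends up at the front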
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:27.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:27.058 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:29.594 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:29.594 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:29.594 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:29.594 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:29.594 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:29.594 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:29.594 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:29.594 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:29.594 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:29.594 07:11:50 
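The '[: : integer expression expected' complaint above is benign but worth decoding: the guard at nvmf/common.sh line 33 ends up running [ '' -eq 1 ] because the variable it tests is unset in this environment, and test(1) cannot compare an empty string numerically. A defensive form of such a guard defaults the value first (the variable name below is a placeholder, not the one common.sh actually tests):

# SOME_OPTIONAL_FLAG is a hypothetical name standing in for the real variable
if [[ "${SOME_OPTIONAL_FLAG:-0}" -eq 1 ]]; then
    echo "optional feature enabled"
fi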
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:29.594 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:29.594 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:29.594 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:29.594 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:29.594 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:29.594 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:29.594 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:29.594 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:29.594 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:29.594 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:29.594 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:29.595 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:29.595 07:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:29.595 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:29.595 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.595 07:11:50 
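The device discovery above maps each supported NIC from its PCI address to its kernel net device purely through sysfs, as the pci_net_devs expansion in the trace shows; on this rig that yields cvl_0_0 and cvl_0_1 on the two E810 (0x159b) ports. A standalone sketch of that mapping:

for pci in 0000:0a:00.0 0000:0a:00.1; do
    # any net device bound to this PCI function appears under its sysfs node
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] || continue        # function has no netdev bound
        # the real helper also appears to require the link to be up first
        echo "Found net devices under $pci: ${dev##*/}"
    done
done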
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:29.595 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:29.595 07:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:29.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:29.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:26:29.595 00:26:29.595 --- 10.0.0.2 ping statistics --- 00:26:29.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.595 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:29.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:29.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:26:29.595 00:26:29.595 --- 10.0.0.1 ping statistics --- 00:26:29.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.595 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:29.595 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=305358 00:26:29.596 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:29.596 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 305358 00:26:29.596 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 305358 ']' 00:26:29.596 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.596 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:29.596 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.596 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:29.596 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:29.596 [2024-11-18 07:11:50.269246] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:26:29.596 [2024-11-18 07:11:50.269349] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:29.596 [2024-11-18 07:11:50.344014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:29.596 [2024-11-18 07:11:50.391521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:29.596 [2024-11-18 07:11:50.391582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:29.596 [2024-11-18 07:11:50.391605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:29.596 [2024-11-18 07:11:50.391617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:29.596 [2024-11-18 07:11:50.391628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
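For reference, the target-side bring-up that the trace above records condenses to the sketch below. Interface names, addresses, the TCP port and the nvmf_tgt path are copied verbatim from the trace; this is an illustrative recap of what the harness's nvmf_tcp_init step did here, not the exact nvmf/common.sh code.

  # Move one port of the NIC pair into a private namespace for the target;
  # the peer port stays in the root namespace and acts as the initiator NIC.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Open the NVMe/TCP listener port towards the initiator and verify reachability
  # in both directions. (The harness also tags the rule with an SPDK_NVMF comment
  # so it can be filtered out again during the iptables-restore cleanup later.)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # Launch the SPDK NVMe-oF target inside the namespace with the flags seen above.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF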
00:26:29.596 [2024-11-18 07:11:50.393264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.596 [2024-11-18 07:11:50.393345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:29.596 [2024-11-18 07:11:50.393325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:29.596 [2024-11-18 07:11:50.393348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.596 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:29.596 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:29.596 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:29.596 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:29.596 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:29.596 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.596 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:29.596 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:29.596 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.596 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:29.856 Malloc0 00:26:29.856 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.856 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:29.856 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.856 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:29.856 Delay0 00:26:29.856 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.856 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:29.856 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.856 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:29.856 [2024-11-18 07:11:50.588565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.856 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.856 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:29.856 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.856 07:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:29.856 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.857 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:29.857 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.857 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:29.857 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.857 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:29.857 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.857 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:29.857 [2024-11-18 07:11:50.616861] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.857 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.857 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:30.424 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:30.424 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:30.424 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:30.424 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:30.424 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:32.345 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:32.345 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:32.345 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:32.345 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:32.345 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:32.345 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:32.345 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=305689 00:26:32.345 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 
-t write -r 60 -v 00:26:32.345 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:32.345 [global] 00:26:32.345 thread=1 00:26:32.345 invalidate=1 00:26:32.345 rw=write 00:26:32.345 time_based=1 00:26:32.345 runtime=60 00:26:32.345 ioengine=libaio 00:26:32.345 direct=1 00:26:32.345 bs=4096 00:26:32.345 iodepth=1 00:26:32.345 norandommap=0 00:26:32.345 numjobs=1 00:26:32.345 00:26:32.345 verify_dump=1 00:26:32.345 verify_backlog=512 00:26:32.345 verify_state_save=0 00:26:32.345 do_verify=1 00:26:32.345 verify=crc32c-intel 00:26:32.345 [job0] 00:26:32.345 filename=/dev/nvme0n1 00:26:32.345 Could not set queue depth (nvme0n1) 00:26:32.603 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:32.603 fio-3.35 00:26:32.603 Starting 1 thread 00:26:35.896 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:35.896 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.896 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.896 true 00:26:35.896 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.896 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:35.896 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.896 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.896 true 00:26:35.896 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.896 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:35.896 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.896 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.896 true 00:26:35.896 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.896 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:35.896 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.896 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.896 true 00:26:35.896 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.896 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:38.436 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:38.436 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.436 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:26:38.436 true 00:26:38.436 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.436 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:38.436 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.436 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:38.436 true 00:26:38.436 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.436 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:38.436 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.436 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:38.436 true 00:26:38.436 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.436 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:38.436 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.436 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:38.436 true 00:26:38.436 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.436 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:38.436 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 305689 00:27:34.668 00:27:34.668 job0: (groupid=0, jobs=1): err= 0: pid=305762: Mon Nov 18 07:12:53 2024 00:27:34.668 read: IOPS=85, BW=341KiB/s (350kB/s)(20.0MiB/60001msec) 00:27:34.668 slat (nsec): min=5565, max=79965, avg=14327.34, stdev=7793.10 00:27:34.668 clat (usec): min=205, max=41170k, avg=11444.32, stdev=575419.58 00:27:34.668 lat (usec): min=213, max=41170k, avg=11458.65, stdev=575419.67 00:27:34.668 clat percentiles (usec): 00:27:34.668 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 235], 00:27:34.668 | 20.00th=[ 243], 30.00th=[ 251], 40.00th=[ 258], 00:27:34.668 | 50.00th=[ 265], 60.00th=[ 273], 70.00th=[ 285], 00:27:34.668 | 80.00th=[ 343], 90.00th=[ 529], 95.00th=[ 41157], 00:27:34.668 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 41157], 00:27:34.668 | 99.95th=[ 41157], 99.99th=[17112761] 00:27:34.668 write: IOPS=93, BW=375KiB/s (384kB/s)(22.0MiB/60001msec); 0 zone resets 00:27:34.668 slat (usec): min=5, max=11755, avg=18.58, stdev=220.34 00:27:34.668 clat (usec): min=159, max=493, avg=210.44, stdev=36.56 00:27:34.668 lat (usec): min=166, max=12125, avg=229.01, stdev=227.01 00:27:34.668 clat percentiles (usec): 00:27:34.668 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 184], 00:27:34.668 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 202], 60.00th=[ 210], 00:27:34.668 | 70.00th=[ 219], 80.00th=[ 231], 90.00th=[ 253], 95.00th=[ 269], 00:27:34.668 | 99.00th=[ 359], 99.50th=[ 396], 99.90th=[ 457], 99.95th=[ 486], 00:27:34.668 | 99.99th=[ 494] 
00:27:34.668 bw ( KiB/s): min= 4096, max= 8192, per=100.00%, avg=5266.29, stdev=1612.31, samples=7 00:27:34.668 iops : min= 1024, max= 2048, avg=1316.57, stdev=403.08, samples=7 00:27:34.668 lat (usec) : 250=60.08%, 500=34.25%, 750=1.97%, 1000=0.02% 00:27:34.668 lat (msec) : 2=0.02%, 50=3.65%, >=2000=0.01% 00:27:34.668 cpu : usr=0.18%, sys=0.35%, ctx=10748, majf=0, minf=1 00:27:34.668 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:34.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:34.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:34.668 issued rwts: total=5120,5626,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:34.668 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:34.668 00:27:34.668 Run status group 0 (all jobs): 00:27:34.668 READ: bw=341KiB/s (350kB/s), 341KiB/s-341KiB/s (350kB/s-350kB/s), io=20.0MiB (21.0MB), run=60001-60001msec 00:27:34.668 WRITE: bw=375KiB/s (384kB/s), 375KiB/s-375KiB/s (384kB/s-384kB/s), io=22.0MiB (23.0MB), run=60001-60001msec 00:27:34.668 00:27:34.668 Disk stats (read/write): 00:27:34.668 nvme0n1: ios=5219/5131, merge=0/0, ticks=18566/1052, in_queue=19618, util=99.51% 00:27:34.668 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:34.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:34.668 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:34.668 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:34.668 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:34.668 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:34.668 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:34.668 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:34.668 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:34.668 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:34.669 nvmf hotplug test: fio successful as expected 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - 
SIGINT SIGTERM EXIT 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:34.669 rmmod nvme_tcp 00:27:34.669 rmmod nvme_fabrics 00:27:34.669 rmmod nvme_keyring 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 305358 ']' 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 305358 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 305358 ']' 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 305358 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 305358 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 305358' 00:27:34.669 killing process with pid 305358 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 305358 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 305358 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:34.669 07:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:34.669 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.237 07:12:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:35.237 00:27:35.237 real 1m8.231s 00:27:35.237 user 4m10.763s 00:27:35.237 sys 0m6.643s 00:27:35.237 07:12:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:35.237 07:12:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:35.237 ************************************ 00:27:35.237 END TEST nvmf_initiator_timeout 00:27:35.237 ************************************ 00:27:35.237 07:12:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:35.237 07:12:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:35.237 07:12:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:35.237 07:12:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:35.237 07:12:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.771 07:12:58 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:37.771 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:37.771 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:37.771 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:37.771 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:37.771 07:12:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:37.772 ************************************ 00:27:37.772 START TEST nvmf_perf_adq 00:27:37.772 ************************************ 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:37.772 * Looking for test storage... 
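With the initiator_timeout run finished above and nvmf_perf_adq starting here, the sequence that the initiator_timeout trace walked through reduces to the sketch below. Subsystem name, serial, addresses and latency values are copied from the trace; rpc.py stands in for the harness's rpc_cmd helper and fio-wrapper for scripts/fio-wrapper, so treat this as a condensed recap rather than the verbatim initiator_timeout.sh.

  # Target side: a malloc bdev wrapped in a delay bdev, exported over NVMe/TCP.
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: connect, run a 60 s verified 4k write job in the background,
  # then push the delay bdev latencies up and back down while I/O is in flight
  # (values exactly as traced above).
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v & fio_pid=$!
  sleep 3
  rpc.py bdev_delay_update_latency Delay0 avg_read  31000000
  rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
  rpc.py bdev_delay_update_latency Delay0 p99_read  31000000
  rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
  sleep 3
  rpc.py bdev_delay_update_latency Delay0 avg_read  30
  rpc.py bdev_delay_update_latency Delay0 avg_write 30
  rpc.py bdev_delay_update_latency Delay0 p99_read  30
  rpc.py bdev_delay_update_latency Delay0 p99_write 30
  wait "$fio_pid"

  # Teardown once fio reports success ("nvmf hotplug test: fio successful as expected").
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The wrapper's generated job file is the [global]/[job0] block dumped earlier in the trace: libaio, direct=1, bs=4096, iodepth=1, time_based 60 s writes with crc32c-intel verification against /dev/nvme0n1.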
00:27:37.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:37.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.772 --rc genhtml_branch_coverage=1 00:27:37.772 --rc genhtml_function_coverage=1 00:27:37.772 --rc genhtml_legend=1 00:27:37.772 --rc geninfo_all_blocks=1 00:27:37.772 --rc geninfo_unexecuted_blocks=1 00:27:37.772 00:27:37.772 ' 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:37.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.772 --rc genhtml_branch_coverage=1 00:27:37.772 --rc genhtml_function_coverage=1 00:27:37.772 --rc genhtml_legend=1 00:27:37.772 --rc geninfo_all_blocks=1 00:27:37.772 --rc geninfo_unexecuted_blocks=1 00:27:37.772 00:27:37.772 ' 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:37.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.772 --rc genhtml_branch_coverage=1 00:27:37.772 --rc genhtml_function_coverage=1 00:27:37.772 --rc genhtml_legend=1 00:27:37.772 --rc geninfo_all_blocks=1 00:27:37.772 --rc geninfo_unexecuted_blocks=1 00:27:37.772 00:27:37.772 ' 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:37.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.772 --rc genhtml_branch_coverage=1 00:27:37.772 --rc genhtml_function_coverage=1 00:27:37.772 --rc genhtml_legend=1 00:27:37.772 --rc geninfo_all_blocks=1 00:27:37.772 --rc geninfo_unexecuted_blocks=1 00:27:37.772 00:27:37.772 ' 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:37.772 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:37.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:37.773 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:37.773 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:37.773 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:37.773 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:37.773 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:37.773 07:12:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:39.674 07:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:39.674 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:39.674 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.674 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:39.675 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:39.675 07:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:39.675 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:39.675 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:40.242 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:44.428 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:49.695 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:49.695 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:49.695 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:49.695 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:49.695 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:49.695 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:49.695 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.695 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.695 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.695 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:49.695 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:49.695 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:49.696 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:49.696 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:49.696 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:49.696 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:49.696 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:49.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:49.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:27:49.696 00:27:49.696 --- 10.0.0.2 ping statistics --- 00:27:49.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.696 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:27:49.696 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:49.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:49.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:27:49.696 00:27:49.696 --- 10.0.0.1 ping statistics --- 00:27:49.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.697 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=318182 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 318182 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 318182 ']' 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.697 [2024-11-18 07:13:10.203244] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:27:49.697 [2024-11-18 07:13:10.203317] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:49.697 [2024-11-18 07:13:10.277713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:49.697 [2024-11-18 07:13:10.330868] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:49.697 [2024-11-18 07:13:10.330928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:49.697 [2024-11-18 07:13:10.330963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:49.697 [2024-11-18 07:13:10.330975] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:49.697 [2024-11-18 07:13:10.330985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:49.697 [2024-11-18 07:13:10.332664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.697 [2024-11-18 07:13:10.332697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:49.697 [2024-11-18 07:13:10.332735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:49.697 [2024-11-18 07:13:10.332737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.697 
07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.697 [2024-11-18 07:13:10.620275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.697 Malloc1 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.697 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.956 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.956 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:49.956 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.956 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.956 [2024-11-18 07:13:10.680119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:49.956 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.956 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=318332 00:27:49.956 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:49.956 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:51.856 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:27:51.856 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.856 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.856 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.856 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:27:51.856 "tick_rate": 2700000000, 00:27:51.856 "poll_groups": [ 00:27:51.856 { 00:27:51.856 "name": "nvmf_tgt_poll_group_000", 00:27:51.856 "admin_qpairs": 1, 00:27:51.856 "io_qpairs": 1, 00:27:51.856 "current_admin_qpairs": 1, 00:27:51.856 "current_io_qpairs": 1, 00:27:51.856 "pending_bdev_io": 0, 00:27:51.856 "completed_nvme_io": 19838, 00:27:51.856 "transports": [ 00:27:51.856 { 00:27:51.856 "trtype": "TCP" 00:27:51.856 } 00:27:51.856 ] 00:27:51.856 }, 00:27:51.856 { 00:27:51.856 "name": "nvmf_tgt_poll_group_001", 00:27:51.856 "admin_qpairs": 0, 00:27:51.856 "io_qpairs": 1, 00:27:51.856 "current_admin_qpairs": 0, 00:27:51.856 "current_io_qpairs": 1, 00:27:51.856 "pending_bdev_io": 0, 00:27:51.856 "completed_nvme_io": 20090, 00:27:51.856 "transports": [ 00:27:51.856 { 00:27:51.856 "trtype": "TCP" 00:27:51.856 } 00:27:51.856 ] 00:27:51.856 }, 00:27:51.856 { 00:27:51.856 "name": "nvmf_tgt_poll_group_002", 00:27:51.856 "admin_qpairs": 0, 00:27:51.856 "io_qpairs": 1, 00:27:51.856 "current_admin_qpairs": 0, 00:27:51.856 "current_io_qpairs": 1, 00:27:51.856 "pending_bdev_io": 0, 00:27:51.856 "completed_nvme_io": 19998, 00:27:51.856 "transports": [ 00:27:51.856 { 00:27:51.856 "trtype": "TCP" 00:27:51.856 } 00:27:51.856 ] 00:27:51.856 }, 00:27:51.856 { 00:27:51.856 "name": "nvmf_tgt_poll_group_003", 00:27:51.856 "admin_qpairs": 0, 00:27:51.856 "io_qpairs": 1, 00:27:51.856 "current_admin_qpairs": 0, 00:27:51.856 "current_io_qpairs": 1, 00:27:51.856 "pending_bdev_io": 0, 00:27:51.856 "completed_nvme_io": 20036, 00:27:51.856 "transports": [ 00:27:51.856 { 00:27:51.856 "trtype": "TCP" 00:27:51.856 } 00:27:51.856 ] 00:27:51.856 } 00:27:51.856 ] 00:27:51.856 }' 00:27:51.856 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:51.856 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:27:51.856 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:27:51.856 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:27:51.856 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 318332 00:27:59.962 Initializing NVMe Controllers 00:27:59.962 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:59.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:59.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:59.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:59.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:59.963 
Initialization complete. Launching workers. 00:27:59.963 ======================================================== 00:27:59.963 Latency(us) 00:27:59.963 Device Information : IOPS MiB/s Average min max 00:27:59.963 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10510.40 41.06 6090.63 2503.17 10049.54 00:27:59.963 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10655.40 41.62 6006.66 2475.27 9910.60 00:27:59.963 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10494.30 40.99 6099.83 1952.20 10157.76 00:27:59.963 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10531.40 41.14 6077.45 2410.44 9931.62 00:27:59.963 ======================================================== 00:27:59.963 Total : 42191.51 164.81 6068.42 1952.20 10157.76 00:27:59.963 00:27:59.963 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:27:59.963 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:59.963 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:27:59.963 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:59.963 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:27:59.963 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:59.963 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:59.963 rmmod nvme_tcp 00:27:59.963 rmmod nvme_fabrics 00:27:59.963 rmmod nvme_keyring 00:27:59.963 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:59.963 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:27:59.963 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:27:59.963 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 318182 ']' 00:27:59.963 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 318182 00:27:59.963 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 318182 ']' 00:27:59.963 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 318182 00:27:59.963 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:27:59.963 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:59.963 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 318182 00:27:59.963 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:59.963 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:59.963 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 318182' 00:27:59.963 killing process with pid 318182 00:27:59.963 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 318182 00:27:59.963 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 318182 00:28:00.221 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # 
'[' '' == iso ']' 00:28:00.221 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:00.221 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:00.221 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:00.221 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:00.221 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:00.221 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:00.221 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:00.221 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:00.221 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.221 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:00.221 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.752 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:02.752 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:02.752 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:02.752 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:03.011 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:05.548 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:10.821 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:10.821 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:10.821 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:10.821 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:10.821 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:10.821 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:10.821 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.821 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:10.821 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:10.821 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:10.821 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:10.821 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:10.821 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:10.821 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:28:10.821 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:10.821 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:10.821 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:10.821 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:10.822 07:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:10.822 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:10.822 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:10.822 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:10.822 07:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:10.822 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:10.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:10.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:28:10.822 00:28:10.822 --- 10.0.0.2 ping statistics --- 00:28:10.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.822 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:10.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:10.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:28:10.822 00:28:10.822 --- 10.0.0.1 ping statistics --- 00:28:10.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.822 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:10.822 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:10.823 net.core.busy_poll = 1 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:28:10.823 net.core.busy_read = 1 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=320966 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 320966 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 320966 ']' 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:10.823 [2024-11-18 07:13:31.556585] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:28:10.823 [2024-11-18 07:13:31.556662] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:10.823 [2024-11-18 07:13:31.630588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:10.823 [2024-11-18 07:13:31.676226] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:10.823 [2024-11-18 07:13:31.676292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:10.823 [2024-11-18 07:13:31.676316] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:10.823 [2024-11-18 07:13:31.676326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:10.823 [2024-11-18 07:13:31.676336] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:10.823 [2024-11-18 07:13:31.677726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.823 [2024-11-18 07:13:31.677785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:10.823 [2024-11-18 07:13:31.677850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:10.823 [2024-11-18 07:13:31.677853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.823 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.082 07:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:11.082 [2024-11-18 07:13:31.940791] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:11.082 Malloc1 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.082 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:11.082 [2024-11-18 07:13:32.003179] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:11.082 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.082 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=321107 00:28:11.082 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:11.082 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:13.611 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:13.611 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.611 07:13:34 
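On the target side the test drives its ADQ-aware configuration through JSON-RPC; rpc_cmd is the suite's wrapper around scripts/rpc.py. A sketch of the equivalent sequence against a target started with --wait-for-rpc, mirroring the arguments logged above (the RPC path is whatever your checkout uses):

RPC="$SPDK_DIR/scripts/rpc.py"
$RPC sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
$RPC framework_start_init                        # finish the startup deferred by --wait-for-rpc
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

--enable-placement-id 1 together with --sock-priority 1 is what ties each accepted NVMe/TCP socket to the ADQ traffic class configured on the NIC, so a connection is polled by the core that owns its hardware queue; spdk_nvme_perf (-q 64 -o 4096 -w randread on cores 0xF0) then opens four connections against that listener.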
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.611 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.611 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:13.611 "tick_rate": 2700000000, 00:28:13.611 "poll_groups": [ 00:28:13.611 { 00:28:13.611 "name": "nvmf_tgt_poll_group_000", 00:28:13.611 "admin_qpairs": 1, 00:28:13.611 "io_qpairs": 1, 00:28:13.611 "current_admin_qpairs": 1, 00:28:13.611 "current_io_qpairs": 1, 00:28:13.611 "pending_bdev_io": 0, 00:28:13.611 "completed_nvme_io": 23660, 00:28:13.611 "transports": [ 00:28:13.611 { 00:28:13.611 "trtype": "TCP" 00:28:13.611 } 00:28:13.611 ] 00:28:13.611 }, 00:28:13.611 { 00:28:13.611 "name": "nvmf_tgt_poll_group_001", 00:28:13.611 "admin_qpairs": 0, 00:28:13.611 "io_qpairs": 3, 00:28:13.611 "current_admin_qpairs": 0, 00:28:13.611 "current_io_qpairs": 3, 00:28:13.611 "pending_bdev_io": 0, 00:28:13.611 "completed_nvme_io": 26580, 00:28:13.611 "transports": [ 00:28:13.611 { 00:28:13.611 "trtype": "TCP" 00:28:13.611 } 00:28:13.611 ] 00:28:13.611 }, 00:28:13.611 { 00:28:13.611 "name": "nvmf_tgt_poll_group_002", 00:28:13.611 "admin_qpairs": 0, 00:28:13.611 "io_qpairs": 0, 00:28:13.611 "current_admin_qpairs": 0, 00:28:13.611 "current_io_qpairs": 0, 00:28:13.611 "pending_bdev_io": 0, 00:28:13.611 "completed_nvme_io": 0, 00:28:13.611 "transports": [ 00:28:13.611 { 00:28:13.611 "trtype": "TCP" 00:28:13.611 } 00:28:13.611 ] 00:28:13.611 }, 00:28:13.611 { 00:28:13.611 "name": "nvmf_tgt_poll_group_003", 00:28:13.611 "admin_qpairs": 0, 00:28:13.611 "io_qpairs": 0, 00:28:13.611 "current_admin_qpairs": 0, 00:28:13.611 "current_io_qpairs": 0, 00:28:13.611 "pending_bdev_io": 0, 00:28:13.611 "completed_nvme_io": 0, 00:28:13.611 "transports": [ 00:28:13.611 { 00:28:13.611 "trtype": "TCP" 00:28:13.611 } 00:28:13.611 ] 00:28:13.611 } 00:28:13.611 ] 00:28:13.611 }' 00:28:13.611 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:13.611 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:13.611 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:13.611 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:13.611 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 321107 00:28:21.721 Initializing NVMe Controllers 00:28:21.721 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:21.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:21.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:21.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:21.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:21.721 Initialization complete. Launching workers. 
00:28:21.721 ======================================================== 00:28:21.721 Latency(us) 00:28:21.721 Device Information : IOPS MiB/s Average min max 00:28:21.721 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4251.90 16.61 15058.43 1660.06 60073.25 00:28:21.721 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4592.30 17.94 13942.81 1741.34 62060.92 00:28:21.721 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5056.40 19.75 12704.09 1677.68 60626.42 00:28:21.721 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12765.10 49.86 5013.73 1097.51 46407.96 00:28:21.721 ======================================================== 00:28:21.721 Total : 26665.69 104.16 9611.38 1097.51 62060.92 00:28:21.721 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:21.721 rmmod nvme_tcp 00:28:21.721 rmmod nvme_fabrics 00:28:21.721 rmmod nvme_keyring 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 320966 ']' 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 320966 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 320966 ']' 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 320966 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 320966 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 320966' 00:28:21.721 killing process with pid 320966 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 320966 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 320966 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:21.721 07:13:42 
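The nvmf_get_stats / jq step above is the actual ADQ assertion: with four initiator cores connecting, the test requires that at least two of the target's four poll groups end up with zero I/O qpairs, i.e. the connections were steered onto a subset of cores instead of being spread round-robin (here groups 002 and 003 stayed idle, group 001 took three qpairs and group 000 took one). A condensed variant of that check (perf_adq.sh counts the per-group objects with wc -l instead of jq's length):

idle=$("$SPDK_DIR"/scripts/rpc.py nvmf_get_stats \
        | jq -r '[.poll_groups[] | select(.current_io_qpairs == 0)] | length')
(( idle >= 2 )) || echo "ADQ steering did not take effect"

The per-core spread in the latency table is consistent with that placement: the one connection that had a poll group to itself (lcore 7) ran at roughly 12.8k IOPS at ~5 ms average, while the three connections sharing a group sat around 4-5k IOPS at 13-15 ms.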
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:21.721 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:25.006 00:28:25.006 real 0m47.321s 00:28:25.006 user 2m40.937s 00:28:25.006 sys 0m10.060s 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:25.006 ************************************ 00:28:25.006 END TEST nvmf_perf_adq 00:28:25.006 ************************************ 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:25.006 ************************************ 00:28:25.006 START TEST nvmf_shutdown 00:28:25.006 ************************************ 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:25.006 * Looking for test storage... 
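The firewall teardown in the block above works because of how the rule was installed earlier: the suite's ipts helper appends an SPDK_NVMF comment to every iptables rule it adds, so iptr can delete them all at once by filtering the saved ruleset instead of tracking individual rule numbers. The pattern in isolation:

# install a rule tagged with its own text...
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# ...and later drop every tagged rule in one pass:
iptables-save | grep -v SPDK_NVMF | iptables-restore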
00:28:25.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:25.006 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:25.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.007 --rc genhtml_branch_coverage=1 00:28:25.007 --rc genhtml_function_coverage=1 00:28:25.007 --rc genhtml_legend=1 00:28:25.007 --rc geninfo_all_blocks=1 00:28:25.007 --rc geninfo_unexecuted_blocks=1 00:28:25.007 00:28:25.007 ' 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:25.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.007 --rc genhtml_branch_coverage=1 00:28:25.007 --rc genhtml_function_coverage=1 00:28:25.007 --rc genhtml_legend=1 00:28:25.007 --rc geninfo_all_blocks=1 00:28:25.007 --rc geninfo_unexecuted_blocks=1 00:28:25.007 00:28:25.007 ' 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:25.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.007 --rc genhtml_branch_coverage=1 00:28:25.007 --rc genhtml_function_coverage=1 00:28:25.007 --rc genhtml_legend=1 00:28:25.007 --rc geninfo_all_blocks=1 00:28:25.007 --rc geninfo_unexecuted_blocks=1 00:28:25.007 00:28:25.007 ' 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:25.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.007 --rc genhtml_branch_coverage=1 00:28:25.007 --rc genhtml_function_coverage=1 00:28:25.007 --rc genhtml_legend=1 00:28:25.007 --rc geninfo_all_blocks=1 00:28:25.007 --rc geninfo_unexecuted_blocks=1 00:28:25.007 00:28:25.007 ' 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
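The scripts/common.sh trace above is a pure-bash version comparison: both version strings are split on dots and compared field by field, and because the installed lcov reports 1.15 (less than 2) the pre-2.0 --rc lcov_branch_coverage / --rc lcov_function_coverage options are selected. A trimmed sketch of that comparison (the in-tree helper goes through cmp_versions and also copes with rc/pre-release suffixes):

lt() {                                   # true (exit 0) when version $1 < $2
    local -a a b
    local i
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                             # equal is not less-than
}

lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov < 2: use the old --rc option names"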
00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:25.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:25.007 07:13:45 
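The "[: : integer expression expected" message above is harmless noise: line 33 of test/nvmf/common.sh runs a numeric test on a variable that is empty in this environment ('[' '' -eq 1 ']'), so test complains and the branch is simply skipped. A generic illustration of the failure mode and the usual guard (SOME_FLAG and enable_extra_args are placeholder names, not identifiers from the suite):

[ "$SOME_FLAG" -eq 1 ] && enable_extra_args         # '' -eq 1  ->  "integer expression expected"
[ "${SOME_FLAG:-0}" -eq 1 ] && enable_extra_args    # defaulting the empty value keeps the test quiet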
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:25.007 ************************************ 00:28:25.007 START TEST nvmf_shutdown_tc1 00:28:25.007 ************************************ 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:25.007 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:26.913 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:26.913 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:26.913 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:26.913 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:26.913 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:26.913 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:26.913 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:26.914 07:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:26.914 07:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:26.914 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:26.914 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:26.914 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:26.914 07:13:47 
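gather_supported_nvmf_pci_devs, traced above, builds its NIC list from known Intel/Mellanox device IDs and then resolves each PCI function to its kernel netdev through sysfs; on this rig the two 0x8086:0x159b (ice) functions come back as cvl_0_0 and cvl_0_1. A condensed stand-in for that lookup (the suite works from a cached PCI scan rather than calling lspci directly, and the cvl_* names come from the rig's own interface renaming):

for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do       # E810 ports by vendor:device ID
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$path" ] && echo "Found net device under $pci: ${path##*/}"
    done
done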
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:26.914 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:26.914 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:26.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:26.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:28:26.914 00:28:26.914 --- 10.0.0.2 ping statistics --- 00:28:26.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.914 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:28:26.915 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:26.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:26.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:28:26.915 00:28:26.915 --- 10.0.0.1 ping statistics --- 00:28:26.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.915 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:28:26.915 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:26.915 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:26.915 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:26.915 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:26.915 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:26.915 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:26.915 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:26.915 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:26.915 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:26.915 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:26.915 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:26.915 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:26.915 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:26.915 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=324408 00:28:26.915 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:26.915 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 324408 00:28:26.915 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 324408 ']' 00:28:26.915 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.915 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:26.915 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:26.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
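nvmf_tcp_init then rebuilds the same two-ended topology used by the perf_adq run: the two E810 ports are cabled to each other on this rig, one is pushed into the cvl_0_0_ns_spdk namespace as the target end (10.0.0.2) and its peer stays in the root namespace as the initiator end (10.0.0.1), which is why the two pings above cross namespaces over the wire. Reduced to its essentials (SPDK_DIR stands in for the jenkins workspace path):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# the target process itself runs inside the namespace:
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E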
00:28:26.915 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:26.915 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:26.915 [2024-11-18 07:13:47.832560] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:28:26.915 [2024-11-18 07:13:47.832663] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.174 [2024-11-18 07:13:47.905621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:27.174 [2024-11-18 07:13:47.948932] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:27.174 [2024-11-18 07:13:47.948992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:27.174 [2024-11-18 07:13:47.949014] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:27.174 [2024-11-18 07:13:47.949024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:27.174 [2024-11-18 07:13:47.949033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:27.174 [2024-11-18 07:13:47.950635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:27.174 [2024-11-18 07:13:47.950701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:27.174 [2024-11-18 07:13:47.950753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:27.174 [2024-11-18 07:13:47.950756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:27.174 [2024-11-18 07:13:48.088925] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:27.174 07:13:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.174 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:27.432 Malloc1 
00:28:27.432 [2024-11-18 07:13:48.185130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:27.432 Malloc2 00:28:27.432 Malloc3 00:28:27.432 Malloc4 00:28:27.432 Malloc5 00:28:27.432 Malloc6 00:28:27.692 Malloc7 00:28:27.692 Malloc8 00:28:27.692 Malloc9 00:28:27.692 Malloc10 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=324473 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 324473 /var/tmp/bdevperf.sock 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 324473 ']' 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:27.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
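The Malloc1 .. Malloc10 lines above are shutdown.sh's create_subsystems step: each of the ten subsystems gets a 64 MiB / 512 B-block malloc bdev, a namespace, and a TCP listener on 10.0.0.2:4420, with all the RPCs appended to rpcs.txt and replayed in one batch. Approximately (serial strings are illustrative; the suite replays the file through its rpc_cmd wrapper, and rpc.py also accepts such a command list on stdin):

: > rpcs.txt
for i in {1..10}; do
    cat >> rpcs.txt <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
"$SPDK_DIR"/scripts/rpc.py < rpcs.txt

The bdev_svc instance started next on /var/tmp/bdevperf.sock attaches an NVMe-oF controller to each of those ten subsystems via the JSON configuration generated below.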
00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.692 { 00:28:27.692 "params": { 00:28:27.692 "name": "Nvme$subsystem", 00:28:27.692 "trtype": "$TEST_TRANSPORT", 00:28:27.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.692 "adrfam": "ipv4", 00:28:27.692 "trsvcid": "$NVMF_PORT", 00:28:27.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.692 "hdgst": ${hdgst:-false}, 00:28:27.692 "ddgst": ${ddgst:-false} 00:28:27.692 }, 00:28:27.692 "method": "bdev_nvme_attach_controller" 00:28:27.692 } 00:28:27.692 EOF 00:28:27.692 )") 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.692 { 00:28:27.692 "params": { 00:28:27.692 "name": "Nvme$subsystem", 00:28:27.692 "trtype": "$TEST_TRANSPORT", 00:28:27.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.692 "adrfam": "ipv4", 00:28:27.692 "trsvcid": "$NVMF_PORT", 00:28:27.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.692 "hdgst": ${hdgst:-false}, 00:28:27.692 "ddgst": ${ddgst:-false} 00:28:27.692 }, 00:28:27.692 "method": "bdev_nvme_attach_controller" 00:28:27.692 } 00:28:27.692 EOF 00:28:27.692 )") 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.692 { 00:28:27.692 "params": { 00:28:27.692 "name": "Nvme$subsystem", 00:28:27.692 "trtype": "$TEST_TRANSPORT", 00:28:27.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.692 "adrfam": "ipv4", 00:28:27.692 "trsvcid": "$NVMF_PORT", 00:28:27.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.692 "hdgst": ${hdgst:-false}, 00:28:27.692 "ddgst": ${ddgst:-false} 00:28:27.692 }, 00:28:27.692 "method": "bdev_nvme_attach_controller" 00:28:27.692 } 00:28:27.692 EOF 00:28:27.692 )") 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.692 { 00:28:27.692 "params": { 00:28:27.692 "name": "Nvme$subsystem", 00:28:27.692 
"trtype": "$TEST_TRANSPORT", 00:28:27.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.692 "adrfam": "ipv4", 00:28:27.692 "trsvcid": "$NVMF_PORT", 00:28:27.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.692 "hdgst": ${hdgst:-false}, 00:28:27.692 "ddgst": ${ddgst:-false} 00:28:27.692 }, 00:28:27.692 "method": "bdev_nvme_attach_controller" 00:28:27.692 } 00:28:27.692 EOF 00:28:27.692 )") 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.692 { 00:28:27.692 "params": { 00:28:27.692 "name": "Nvme$subsystem", 00:28:27.692 "trtype": "$TEST_TRANSPORT", 00:28:27.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.692 "adrfam": "ipv4", 00:28:27.692 "trsvcid": "$NVMF_PORT", 00:28:27.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.692 "hdgst": ${hdgst:-false}, 00:28:27.692 "ddgst": ${ddgst:-false} 00:28:27.692 }, 00:28:27.692 "method": "bdev_nvme_attach_controller" 00:28:27.692 } 00:28:27.692 EOF 00:28:27.692 )") 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.692 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.692 { 00:28:27.692 "params": { 00:28:27.692 "name": "Nvme$subsystem", 00:28:27.693 "trtype": "$TEST_TRANSPORT", 00:28:27.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.693 "adrfam": "ipv4", 00:28:27.693 "trsvcid": "$NVMF_PORT", 00:28:27.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.693 "hdgst": ${hdgst:-false}, 00:28:27.693 "ddgst": ${ddgst:-false} 00:28:27.693 }, 00:28:27.693 "method": "bdev_nvme_attach_controller" 00:28:27.693 } 00:28:27.693 EOF 00:28:27.693 )") 00:28:27.693 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.693 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.693 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.693 { 00:28:27.693 "params": { 00:28:27.693 "name": "Nvme$subsystem", 00:28:27.693 "trtype": "$TEST_TRANSPORT", 00:28:27.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.693 "adrfam": "ipv4", 00:28:27.693 "trsvcid": "$NVMF_PORT", 00:28:27.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.693 "hdgst": ${hdgst:-false}, 00:28:27.693 "ddgst": ${ddgst:-false} 00:28:27.693 }, 00:28:27.693 "method": "bdev_nvme_attach_controller" 00:28:27.693 } 00:28:27.693 EOF 00:28:27.693 )") 00:28:27.693 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.693 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.693 07:13:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.693 { 00:28:27.693 "params": { 00:28:27.693 "name": "Nvme$subsystem", 00:28:27.693 "trtype": "$TEST_TRANSPORT", 00:28:27.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.693 "adrfam": "ipv4", 00:28:27.693 "trsvcid": "$NVMF_PORT", 00:28:27.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.693 "hdgst": ${hdgst:-false}, 00:28:27.693 "ddgst": ${ddgst:-false} 00:28:27.693 }, 00:28:27.693 "method": "bdev_nvme_attach_controller" 00:28:27.693 } 00:28:27.693 EOF 00:28:27.693 )") 00:28:27.693 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.693 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.693 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.693 { 00:28:27.693 "params": { 00:28:27.693 "name": "Nvme$subsystem", 00:28:27.693 "trtype": "$TEST_TRANSPORT", 00:28:27.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.693 "adrfam": "ipv4", 00:28:27.693 "trsvcid": "$NVMF_PORT", 00:28:27.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.693 "hdgst": ${hdgst:-false}, 00:28:27.693 "ddgst": ${ddgst:-false} 00:28:27.693 }, 00:28:27.693 "method": "bdev_nvme_attach_controller" 00:28:27.693 } 00:28:27.693 EOF 00:28:27.693 )") 00:28:27.693 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.693 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.693 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.693 { 00:28:27.693 "params": { 00:28:27.693 "name": "Nvme$subsystem", 00:28:27.693 "trtype": "$TEST_TRANSPORT", 00:28:27.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.693 "adrfam": "ipv4", 00:28:27.693 "trsvcid": "$NVMF_PORT", 00:28:27.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.693 "hdgst": ${hdgst:-false}, 00:28:27.693 "ddgst": ${ddgst:-false} 00:28:27.693 }, 00:28:27.693 "method": "bdev_nvme_attach_controller" 00:28:27.693 } 00:28:27.693 EOF 00:28:27.693 )") 00:28:27.693 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.693 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
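gen_nvmf_target_json expands the heredoc above once per requested subsystem, collects the results in the config array, joins them with commas and runs the document through jq; the fully expanded output of this call is printed in the next trace lines. Below is a reduced two-subsystem sketch of the same assembly, with the harness variables replaced by the literal values seen in that output; the outer subsystems/bdev envelope is the generic SPDK JSON-config shape and is an assumption about how the helper wraps the entries.

config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the entries with commas and validate with jq, as the IFS=, and printf steps in the trace do.
(IFS=,; printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}") | jq .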
00:28:27.693 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:27.693 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:27.693 "params": { 00:28:27.693 "name": "Nvme1", 00:28:27.693 "trtype": "tcp", 00:28:27.693 "traddr": "10.0.0.2", 00:28:27.693 "adrfam": "ipv4", 00:28:27.693 "trsvcid": "4420", 00:28:27.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:27.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:27.693 "hdgst": false, 00:28:27.693 "ddgst": false 00:28:27.693 }, 00:28:27.693 "method": "bdev_nvme_attach_controller" 00:28:27.693 },{ 00:28:27.693 "params": { 00:28:27.693 "name": "Nvme2", 00:28:27.693 "trtype": "tcp", 00:28:27.693 "traddr": "10.0.0.2", 00:28:27.693 "adrfam": "ipv4", 00:28:27.693 "trsvcid": "4420", 00:28:27.693 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:27.693 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:27.693 "hdgst": false, 00:28:27.693 "ddgst": false 00:28:27.693 }, 00:28:27.693 "method": "bdev_nvme_attach_controller" 00:28:27.693 },{ 00:28:27.693 "params": { 00:28:27.693 "name": "Nvme3", 00:28:27.693 "trtype": "tcp", 00:28:27.693 "traddr": "10.0.0.2", 00:28:27.693 "adrfam": "ipv4", 00:28:27.693 "trsvcid": "4420", 00:28:27.693 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:27.693 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:27.693 "hdgst": false, 00:28:27.693 "ddgst": false 00:28:27.693 }, 00:28:27.693 "method": "bdev_nvme_attach_controller" 00:28:27.693 },{ 00:28:27.693 "params": { 00:28:27.693 "name": "Nvme4", 00:28:27.693 "trtype": "tcp", 00:28:27.693 "traddr": "10.0.0.2", 00:28:27.693 "adrfam": "ipv4", 00:28:27.693 "trsvcid": "4420", 00:28:27.693 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:27.693 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:27.693 "hdgst": false, 00:28:27.693 "ddgst": false 00:28:27.693 }, 00:28:27.693 "method": "bdev_nvme_attach_controller" 00:28:27.693 },{ 00:28:27.693 "params": { 00:28:27.693 "name": "Nvme5", 00:28:27.693 "trtype": "tcp", 00:28:27.693 "traddr": "10.0.0.2", 00:28:27.693 "adrfam": "ipv4", 00:28:27.693 "trsvcid": "4420", 00:28:27.693 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:27.693 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:27.693 "hdgst": false, 00:28:27.693 "ddgst": false 00:28:27.693 }, 00:28:27.693 "method": "bdev_nvme_attach_controller" 00:28:27.693 },{ 00:28:27.693 "params": { 00:28:27.693 "name": "Nvme6", 00:28:27.693 "trtype": "tcp", 00:28:27.693 "traddr": "10.0.0.2", 00:28:27.693 "adrfam": "ipv4", 00:28:27.693 "trsvcid": "4420", 00:28:27.693 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:27.693 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:27.693 "hdgst": false, 00:28:27.693 "ddgst": false 00:28:27.693 }, 00:28:27.693 "method": "bdev_nvme_attach_controller" 00:28:27.693 },{ 00:28:27.693 "params": { 00:28:27.693 "name": "Nvme7", 00:28:27.693 "trtype": "tcp", 00:28:27.693 "traddr": "10.0.0.2", 00:28:27.693 "adrfam": "ipv4", 00:28:27.693 "trsvcid": "4420", 00:28:27.693 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:27.693 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:27.693 "hdgst": false, 00:28:27.693 "ddgst": false 00:28:27.693 }, 00:28:27.693 "method": "bdev_nvme_attach_controller" 00:28:27.693 },{ 00:28:27.693 "params": { 00:28:27.693 "name": "Nvme8", 00:28:27.693 "trtype": "tcp", 00:28:27.693 "traddr": "10.0.0.2", 00:28:27.693 "adrfam": "ipv4", 00:28:27.693 "trsvcid": "4420", 00:28:27.693 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:27.693 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:27.693 "hdgst": false, 00:28:27.693 "ddgst": false 00:28:27.693 }, 00:28:27.693 "method": "bdev_nvme_attach_controller" 00:28:27.693 },{ 00:28:27.693 "params": { 00:28:27.693 "name": "Nvme9", 00:28:27.693 "trtype": "tcp", 00:28:27.693 "traddr": "10.0.0.2", 00:28:27.693 "adrfam": "ipv4", 00:28:27.693 "trsvcid": "4420", 00:28:27.693 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:27.693 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:27.693 "hdgst": false, 00:28:27.693 "ddgst": false 00:28:27.693 }, 00:28:27.693 "method": "bdev_nvme_attach_controller" 00:28:27.693 },{ 00:28:27.693 "params": { 00:28:27.693 "name": "Nvme10", 00:28:27.693 "trtype": "tcp", 00:28:27.693 "traddr": "10.0.0.2", 00:28:27.693 "adrfam": "ipv4", 00:28:27.693 "trsvcid": "4420", 00:28:27.693 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:27.693 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:27.693 "hdgst": false, 00:28:27.693 "ddgst": false 00:28:27.693 }, 00:28:27.693 "method": "bdev_nvme_attach_controller" 00:28:27.693 }' 00:28:27.952 [2024-11-18 07:13:48.676513] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:28:27.952 [2024-11-18 07:13:48.676593] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:27.952 [2024-11-18 07:13:48.750564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.952 [2024-11-18 07:13:48.799123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.856 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:29.856 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:29.856 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:29.856 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.856 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:29.856 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.856 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 324473 00:28:29.856 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:29.856 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:30.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 324473 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:30.794 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 324408 00:28:30.794 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:30.794 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 
3 4 5 6 7 8 9 10 00:28:30.794 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:30.794 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:30.794 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.794 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.794 { 00:28:30.794 "params": { 00:28:30.794 "name": "Nvme$subsystem", 00:28:30.794 "trtype": "$TEST_TRANSPORT", 00:28:30.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.794 "adrfam": "ipv4", 00:28:30.794 "trsvcid": "$NVMF_PORT", 00:28:30.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.795 "hdgst": ${hdgst:-false}, 00:28:30.795 "ddgst": ${ddgst:-false} 00:28:30.795 }, 00:28:30.795 "method": "bdev_nvme_attach_controller" 00:28:30.795 } 00:28:30.795 EOF 00:28:30.795 )") 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.795 { 00:28:30.795 "params": { 00:28:30.795 "name": "Nvme$subsystem", 00:28:30.795 "trtype": "$TEST_TRANSPORT", 00:28:30.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.795 "adrfam": "ipv4", 00:28:30.795 "trsvcid": "$NVMF_PORT", 00:28:30.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.795 "hdgst": ${hdgst:-false}, 00:28:30.795 "ddgst": ${ddgst:-false} 00:28:30.795 }, 00:28:30.795 "method": "bdev_nvme_attach_controller" 00:28:30.795 } 00:28:30.795 EOF 00:28:30.795 )") 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.795 { 00:28:30.795 "params": { 00:28:30.795 "name": "Nvme$subsystem", 00:28:30.795 "trtype": "$TEST_TRANSPORT", 00:28:30.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.795 "adrfam": "ipv4", 00:28:30.795 "trsvcid": "$NVMF_PORT", 00:28:30.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.795 "hdgst": ${hdgst:-false}, 00:28:30.795 "ddgst": ${ddgst:-false} 00:28:30.795 }, 00:28:30.795 "method": "bdev_nvme_attach_controller" 00:28:30.795 } 00:28:30.795 EOF 00:28:30.795 )") 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.795 { 00:28:30.795 "params": { 00:28:30.795 "name": "Nvme$subsystem", 00:28:30.795 "trtype": "$TEST_TRANSPORT", 00:28:30.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.795 "adrfam": "ipv4", 00:28:30.795 
"trsvcid": "$NVMF_PORT", 00:28:30.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.795 "hdgst": ${hdgst:-false}, 00:28:30.795 "ddgst": ${ddgst:-false} 00:28:30.795 }, 00:28:30.795 "method": "bdev_nvme_attach_controller" 00:28:30.795 } 00:28:30.795 EOF 00:28:30.795 )") 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.795 { 00:28:30.795 "params": { 00:28:30.795 "name": "Nvme$subsystem", 00:28:30.795 "trtype": "$TEST_TRANSPORT", 00:28:30.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.795 "adrfam": "ipv4", 00:28:30.795 "trsvcid": "$NVMF_PORT", 00:28:30.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.795 "hdgst": ${hdgst:-false}, 00:28:30.795 "ddgst": ${ddgst:-false} 00:28:30.795 }, 00:28:30.795 "method": "bdev_nvme_attach_controller" 00:28:30.795 } 00:28:30.795 EOF 00:28:30.795 )") 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.795 { 00:28:30.795 "params": { 00:28:30.795 "name": "Nvme$subsystem", 00:28:30.795 "trtype": "$TEST_TRANSPORT", 00:28:30.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.795 "adrfam": "ipv4", 00:28:30.795 "trsvcid": "$NVMF_PORT", 00:28:30.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.795 "hdgst": ${hdgst:-false}, 00:28:30.795 "ddgst": ${ddgst:-false} 00:28:30.795 }, 00:28:30.795 "method": "bdev_nvme_attach_controller" 00:28:30.795 } 00:28:30.795 EOF 00:28:30.795 )") 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.795 { 00:28:30.795 "params": { 00:28:30.795 "name": "Nvme$subsystem", 00:28:30.795 "trtype": "$TEST_TRANSPORT", 00:28:30.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.795 "adrfam": "ipv4", 00:28:30.795 "trsvcid": "$NVMF_PORT", 00:28:30.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.795 "hdgst": ${hdgst:-false}, 00:28:30.795 "ddgst": ${ddgst:-false} 00:28:30.795 }, 00:28:30.795 "method": "bdev_nvme_attach_controller" 00:28:30.795 } 00:28:30.795 EOF 00:28:30.795 )") 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.795 { 00:28:30.795 
"params": { 00:28:30.795 "name": "Nvme$subsystem", 00:28:30.795 "trtype": "$TEST_TRANSPORT", 00:28:30.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.795 "adrfam": "ipv4", 00:28:30.795 "trsvcid": "$NVMF_PORT", 00:28:30.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.795 "hdgst": ${hdgst:-false}, 00:28:30.795 "ddgst": ${ddgst:-false} 00:28:30.795 }, 00:28:30.795 "method": "bdev_nvme_attach_controller" 00:28:30.795 } 00:28:30.795 EOF 00:28:30.795 )") 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.795 { 00:28:30.795 "params": { 00:28:30.795 "name": "Nvme$subsystem", 00:28:30.795 "trtype": "$TEST_TRANSPORT", 00:28:30.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.795 "adrfam": "ipv4", 00:28:30.795 "trsvcid": "$NVMF_PORT", 00:28:30.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.795 "hdgst": ${hdgst:-false}, 00:28:30.795 "ddgst": ${ddgst:-false} 00:28:30.795 }, 00:28:30.795 "method": "bdev_nvme_attach_controller" 00:28:30.795 } 00:28:30.795 EOF 00:28:30.795 )") 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.795 { 00:28:30.795 "params": { 00:28:30.795 "name": "Nvme$subsystem", 00:28:30.795 "trtype": "$TEST_TRANSPORT", 00:28:30.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.795 "adrfam": "ipv4", 00:28:30.795 "trsvcid": "$NVMF_PORT", 00:28:30.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.795 "hdgst": ${hdgst:-false}, 00:28:30.795 "ddgst": ${ddgst:-false} 00:28:30.795 }, 00:28:30.795 "method": "bdev_nvme_attach_controller" 00:28:30.795 } 00:28:30.795 EOF 00:28:30.795 )") 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:30.795 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:30.795 "params": { 00:28:30.795 "name": "Nvme1", 00:28:30.795 "trtype": "tcp", 00:28:30.795 "traddr": "10.0.0.2", 00:28:30.795 "adrfam": "ipv4", 00:28:30.795 "trsvcid": "4420", 00:28:30.795 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:30.795 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:30.795 "hdgst": false, 00:28:30.795 "ddgst": false 00:28:30.795 }, 00:28:30.795 "method": "bdev_nvme_attach_controller" 00:28:30.795 },{ 00:28:30.795 "params": { 00:28:30.795 "name": "Nvme2", 00:28:30.795 "trtype": "tcp", 00:28:30.795 "traddr": "10.0.0.2", 00:28:30.795 "adrfam": "ipv4", 00:28:30.795 "trsvcid": "4420", 00:28:30.795 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:30.795 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:30.795 "hdgst": false, 00:28:30.795 "ddgst": false 00:28:30.795 }, 00:28:30.795 "method": "bdev_nvme_attach_controller" 00:28:30.795 },{ 00:28:30.795 "params": { 00:28:30.795 "name": "Nvme3", 00:28:30.795 "trtype": "tcp", 00:28:30.795 "traddr": "10.0.0.2", 00:28:30.795 "adrfam": "ipv4", 00:28:30.795 "trsvcid": "4420", 00:28:30.795 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:30.795 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:30.795 "hdgst": false, 00:28:30.795 "ddgst": false 00:28:30.795 }, 00:28:30.795 "method": "bdev_nvme_attach_controller" 00:28:30.795 },{ 00:28:30.795 "params": { 00:28:30.795 "name": "Nvme4", 00:28:30.795 "trtype": "tcp", 00:28:30.795 "traddr": "10.0.0.2", 00:28:30.795 "adrfam": "ipv4", 00:28:30.795 "trsvcid": "4420", 00:28:30.795 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:30.795 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:30.795 "hdgst": false, 00:28:30.795 "ddgst": false 00:28:30.795 }, 00:28:30.795 "method": "bdev_nvme_attach_controller" 00:28:30.795 },{ 00:28:30.795 "params": { 00:28:30.795 "name": "Nvme5", 00:28:30.795 "trtype": "tcp", 00:28:30.795 "traddr": "10.0.0.2", 00:28:30.795 "adrfam": "ipv4", 00:28:30.795 "trsvcid": "4420", 00:28:30.795 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:30.795 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:30.795 "hdgst": false, 00:28:30.795 "ddgst": false 00:28:30.795 }, 00:28:30.795 "method": "bdev_nvme_attach_controller" 00:28:30.795 },{ 00:28:30.795 "params": { 00:28:30.795 "name": "Nvme6", 00:28:30.795 "trtype": "tcp", 00:28:30.795 "traddr": "10.0.0.2", 00:28:30.795 "adrfam": "ipv4", 00:28:30.795 "trsvcid": "4420", 00:28:30.795 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:30.795 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:30.795 "hdgst": false, 00:28:30.795 "ddgst": false 00:28:30.795 }, 00:28:30.795 "method": "bdev_nvme_attach_controller" 00:28:30.795 },{ 00:28:30.795 "params": { 00:28:30.795 "name": "Nvme7", 00:28:30.795 "trtype": "tcp", 00:28:30.795 "traddr": "10.0.0.2", 00:28:30.795 "adrfam": "ipv4", 00:28:30.795 "trsvcid": "4420", 00:28:30.795 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:30.795 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:30.795 "hdgst": false, 00:28:30.795 "ddgst": false 00:28:30.795 }, 00:28:30.795 "method": "bdev_nvme_attach_controller" 00:28:30.795 },{ 00:28:30.795 "params": { 00:28:30.795 "name": "Nvme8", 00:28:30.795 "trtype": "tcp", 00:28:30.795 "traddr": "10.0.0.2", 00:28:30.795 "adrfam": "ipv4", 00:28:30.795 "trsvcid": "4420", 00:28:30.795 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:30.795 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:30.795 "hdgst": false, 00:28:30.795 "ddgst": false 00:28:30.795 }, 00:28:30.795 "method": "bdev_nvme_attach_controller" 00:28:30.795 },{ 00:28:30.795 "params": { 00:28:30.795 "name": "Nvme9", 00:28:30.795 "trtype": "tcp", 00:28:30.796 "traddr": "10.0.0.2", 00:28:30.796 "adrfam": "ipv4", 00:28:30.796 "trsvcid": "4420", 00:28:30.796 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:30.796 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:30.796 "hdgst": false, 00:28:30.796 "ddgst": false 00:28:30.796 }, 00:28:30.796 "method": "bdev_nvme_attach_controller" 00:28:30.796 },{ 00:28:30.796 "params": { 00:28:30.796 "name": "Nvme10", 00:28:30.796 "trtype": "tcp", 00:28:30.796 "traddr": "10.0.0.2", 00:28:30.796 "adrfam": "ipv4", 00:28:30.796 "trsvcid": "4420", 00:28:30.796 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:30.796 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:30.796 "hdgst": false, 00:28:30.796 "ddgst": false 00:28:30.796 }, 00:28:30.796 "method": "bdev_nvme_attach_controller" 00:28:30.796 }' 00:28:30.796 [2024-11-18 07:13:51.768094] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:28:30.796 [2024-11-18 07:13:51.768180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid324891 ] 00:28:31.053 [2024-11-18 07:13:51.839880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.053 [2024-11-18 07:13:51.890345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.955 Running I/O for 1 seconds... 00:28:33.792 1805.00 IOPS, 112.81 MiB/s 00:28:33.792 Latency(us) 00:28:33.792 [2024-11-18T06:13:54.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:33.792 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:33.792 Verification LBA range: start 0x0 length 0x400 00:28:33.792 Nvme1n1 : 1.09 233.89 14.62 0.00 0.00 270841.74 17476.27 257872.02 00:28:33.792 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:33.792 Verification LBA range: start 0x0 length 0x400 00:28:33.792 Nvme2n1 : 1.09 240.16 15.01 0.00 0.00 256297.93 14757.74 245444.46 00:28:33.792 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:33.792 Verification LBA range: start 0x0 length 0x400 00:28:33.792 Nvme3n1 : 1.08 238.02 14.88 0.00 0.00 256238.36 18058.81 253211.69 00:28:33.793 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:33.793 Verification LBA range: start 0x0 length 0x400 00:28:33.793 Nvme4n1 : 1.09 239.55 14.97 0.00 0.00 250311.11 2839.89 267192.70 00:28:33.793 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:33.793 Verification LBA range: start 0x0 length 0x400 00:28:33.793 Nvme5n1 : 1.10 232.10 14.51 0.00 0.00 254738.58 21651.15 253211.69 00:28:33.793 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:33.793 Verification LBA range: start 0x0 length 0x400 00:28:33.793 Nvme6n1 : 1.11 231.38 14.46 0.00 0.00 250978.61 20486.07 251658.24 00:28:33.793 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:33.793 Verification LBA range: start 0x0 length 0x400 00:28:33.793 Nvme7n1 : 1.20 267.08 16.69 0.00 0.00 215358.43 13301.38 236123.78 00:28:33.793 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:33.793 Verification 
LBA range: start 0x0 length 0x400 00:28:33.793 Nvme8n1 : 1.20 267.76 16.73 0.00 0.00 211405.71 11893.57 254765.13 00:28:33.793 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:33.793 Verification LBA range: start 0x0 length 0x400 00:28:33.793 Nvme9n1 : 1.18 219.99 13.75 0.00 0.00 251874.17 5072.97 278066.82 00:28:33.793 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:33.793 Verification LBA range: start 0x0 length 0x400 00:28:33.793 Nvme10n1 : 1.20 265.82 16.61 0.00 0.00 206307.56 12330.48 259425.47 00:28:33.793 [2024-11-18T06:13:54.771Z] =================================================================================================================== 00:28:33.793 [2024-11-18T06:13:54.771Z] Total : 2435.75 152.23 0.00 0.00 240304.15 2839.89 278066.82 00:28:34.094 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:34.094 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:34.094 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:34.094 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:34.094 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:34.094 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:34.094 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:34.094 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:34.094 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:34.094 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:34.094 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:34.094 rmmod nvme_tcp 00:28:34.094 rmmod nvme_fabrics 00:28:34.094 rmmod nvme_keyring 00:28:34.094 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:34.094 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:34.094 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:34.094 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 324408 ']' 00:28:34.094 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 324408 00:28:34.094 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 324408 ']' 00:28:34.094 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 324408 00:28:34.094 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:34.094 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:34.094 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 324408 00:28:34.094 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:34.094 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:34.094 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 324408' 00:28:34.094 killing process with pid 324408 00:28:34.094 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 324408 00:28:34.094 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 324408 00:28:34.706 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:34.706 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:34.706 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:34.706 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:34.706 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:28:34.706 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:34.706 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:28:34.706 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:34.706 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:34.706 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.706 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.706 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:36.680 00:28:36.680 real 0m11.778s 00:28:36.680 user 0m35.117s 00:28:36.680 sys 0m3.045s 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.680 ************************************ 00:28:36.680 END TEST nvmf_shutdown_tc1 00:28:36.680 ************************************ 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 
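The throughput table above comes from bdevperf, launched at shutdown.sh@92 with a queue depth of 64 (-q), 64 KiB I/Os (-o 65536), the verify workload (-w) and a 1 second run (-t 1), reading its bdev configuration over a /dev/fd path exactly as bdev_svc did. An equivalent stand-alone invocation against a saved copy of the generated config; the temporary file name is the only liberty taken, and gen_nvmf_target_json again assumes nvmf/common.sh is sourced.

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 > /tmp/nvmf_bdevperf.json   # harness helper from nvmf/common.sh
"$rootdir/build/examples/bdevperf" --json /tmp/nvmf_bdevperf.json -q 64 -o 65536 -w verify -t 1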
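nvmftestfini then tears the fixture down in roughly the reverse order it was built: unload the NVMe-oF host modules, kill the target, strip only the iptables rules tagged with an SPDK_NVMF comment, then remove the target namespace and flush the initiator address (that flush is the next entry in the trace). Condensed into plain commands, with names and the PID taken from this log; the ip netns delete line is an assumption about what remove_spdk_ns does.

sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                      # killprocess; 324408 in this run, started from this shell
iptables-save | grep -v SPDK_NVMF | iptables-restore    # keep every rule the harness did not add
ip netns delete cvl_0_0_ns_spdk                         # assumed body of remove_spdk_ns
ip -4 addr flush cvl_0_1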
00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:36.680 ************************************ 00:28:36.680 START TEST nvmf_shutdown_tc2 00:28:36.680 ************************************ 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:36.680 07:13:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.680 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:36.681 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:36.681 07:13:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:36.681 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:36.681 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:36.681 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:36.681 07:13:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:36.681 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:36.940 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:36.940 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:36.940 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:36.940 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:36.940 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:36.940 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:36.940 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:36.940 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:36.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:36.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:28:36.940 00:28:36.940 --- 10.0.0.2 ping statistics --- 00:28:36.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.940 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:28:36.940 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:36.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:36.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:28:36.940 00:28:36.940 --- 10.0.0.1 ping statistics --- 00:28:36.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.940 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:28:36.940 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:36.940 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:36.940 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:36.940 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:36.940 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:36.940 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:36.940 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:36.940 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:36.940 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:36.941 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:36.941 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:36.941 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:36.941 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:36.941 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=325679 00:28:36.941 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:36.941 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 325679 00:28:36.941 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 325679 ']' 00:28:36.941 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.941 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:36.941 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
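At this point nvmf_tcp_init has finished: cvl_0_0 sits in the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 stays in the host namespace with 10.0.0.1/24, TCP/4420 is opened on the initiator side, both directions ping, and nvmf_tgt (pid 325679) is being started inside the namespace. Condensed into a stand-alone sketch of the same plumbing (interface names, addresses and the iptables comment are copied from the trace; this restates the harness steps rather than quoting nvmf/common.sh):

```bash
#!/usr/bin/env bash
# Sketch of the namespace split traced above: target port in a netns,
# initiator port in the host namespace, NVMe/TCP port opened, ping both ways.
set -euo pipefail

TGT_IF=cvl_0_0          # port handed to the SPDK target
INI_IF=cvl_0_1          # port left in the default (initiator) namespace
NS=cvl_0_0_ns_spdk      # network namespace for the target

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to port 4420 on the initiator-side interface
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity checks in both directions, mirroring the pings in the log
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```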
00:28:36.941 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:36.941 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:36.941 [2024-11-18 07:13:57.802344] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:28:36.941 [2024-11-18 07:13:57.802431] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:36.941 [2024-11-18 07:13:57.878408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:37.201 [2024-11-18 07:13:57.929289] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:37.201 [2024-11-18 07:13:57.929338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:37.201 [2024-11-18 07:13:57.929352] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:37.201 [2024-11-18 07:13:57.929364] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:37.201 [2024-11-18 07:13:57.929374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:37.201 [2024-11-18 07:13:57.930993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:37.201 [2024-11-18 07:13:57.931118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:37.201 [2024-11-18 07:13:57.931121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.201 [2024-11-18 07:13:57.931031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:37.201 [2024-11-18 07:13:58.077913] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:37.201 07:13:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:37.201 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:37.202 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:37.202 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:37.202 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:37.202 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:37.202 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:37.202 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:37.202 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:37.202 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:37.202 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:37.202 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:37.202 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:37.202 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.202 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:37.202 Malloc1 
00:28:37.461 [2024-11-18 07:13:58.183649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.461 Malloc2 00:28:37.461 Malloc3 00:28:37.461 Malloc4 00:28:37.461 Malloc5 00:28:37.461 Malloc6 00:28:37.720 Malloc7 00:28:37.720 Malloc8 00:28:37.720 Malloc9 00:28:37.720 Malloc10 00:28:37.720 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.720 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:37.720 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:37.720 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:37.720 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=325856 00:28:37.720 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 325856 /var/tmp/bdevperf.sock 00:28:37.720 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 325856 ']' 00:28:37.720 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:37.720 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:37.720 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:37.720 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:37.720 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:28:37.720 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:37.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
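The Malloc1 through Malloc10 bdevs and the nvmf_tcp_listen notice above are the visible result of the RPC batch that shutdown.sh writes into rpcs.txt (the cat loop at shutdown.sh@29); the batch itself is not echoed in this trace. A hedged per-subsystem sketch that produces the same end state with individual rpc.py calls; the malloc geometry (64 MiB, 512-byte blocks), the -a/-s flags and the rpc.py path are assumptions, while the NQNs, address and port match the bdevperf config printed below:

```bash
# Sketch only: reproduces the observable target state (Malloc1..Malloc10,
# cnode1..cnode10, one TCP listener on 10.0.0.2:4420), not the exact rpcs.txt.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Transport was already created above: nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 10); do
    # Backing bdev for the namespace (size/block size are assumed values)
    $RPC bdev_malloc_create -b "Malloc$i" 64 512
    # Subsystem open to any host; serial number is an assumed placeholder
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    # Listener matches the nvmf_tcp_listen notice in the log
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
```

The RPC socket /var/tmp/spdk.sock is a unix socket in the shared filesystem, so the calls work from the host namespace even though nvmf_tgt itself runs inside cvl_0_0_ns_spdk.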
00:28:37.720 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:28:37.720 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:37.720 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.720 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:37.720 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.720 { 00:28:37.720 "params": { 00:28:37.720 "name": "Nvme$subsystem", 00:28:37.720 "trtype": "$TEST_TRANSPORT", 00:28:37.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.720 "adrfam": "ipv4", 00:28:37.720 "trsvcid": "$NVMF_PORT", 00:28:37.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.720 "hdgst": ${hdgst:-false}, 00:28:37.720 "ddgst": ${ddgst:-false} 00:28:37.720 }, 00:28:37.720 "method": "bdev_nvme_attach_controller" 00:28:37.720 } 00:28:37.720 EOF 00:28:37.720 )") 00:28:37.720 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.720 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.721 { 00:28:37.721 "params": { 00:28:37.721 "name": "Nvme$subsystem", 00:28:37.721 "trtype": "$TEST_TRANSPORT", 00:28:37.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.721 "adrfam": "ipv4", 00:28:37.721 "trsvcid": "$NVMF_PORT", 00:28:37.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.721 "hdgst": ${hdgst:-false}, 00:28:37.721 "ddgst": ${ddgst:-false} 00:28:37.721 }, 00:28:37.721 "method": "bdev_nvme_attach_controller" 00:28:37.721 } 00:28:37.721 EOF 00:28:37.721 )") 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.721 { 00:28:37.721 "params": { 00:28:37.721 "name": "Nvme$subsystem", 00:28:37.721 "trtype": "$TEST_TRANSPORT", 00:28:37.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.721 "adrfam": "ipv4", 00:28:37.721 "trsvcid": "$NVMF_PORT", 00:28:37.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.721 "hdgst": ${hdgst:-false}, 00:28:37.721 "ddgst": ${ddgst:-false} 00:28:37.721 }, 00:28:37.721 "method": "bdev_nvme_attach_controller" 00:28:37.721 } 00:28:37.721 EOF 00:28:37.721 )") 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.721 { 00:28:37.721 "params": { 00:28:37.721 "name": "Nvme$subsystem", 00:28:37.721 
"trtype": "$TEST_TRANSPORT", 00:28:37.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.721 "adrfam": "ipv4", 00:28:37.721 "trsvcid": "$NVMF_PORT", 00:28:37.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.721 "hdgst": ${hdgst:-false}, 00:28:37.721 "ddgst": ${ddgst:-false} 00:28:37.721 }, 00:28:37.721 "method": "bdev_nvme_attach_controller" 00:28:37.721 } 00:28:37.721 EOF 00:28:37.721 )") 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.721 { 00:28:37.721 "params": { 00:28:37.721 "name": "Nvme$subsystem", 00:28:37.721 "trtype": "$TEST_TRANSPORT", 00:28:37.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.721 "adrfam": "ipv4", 00:28:37.721 "trsvcid": "$NVMF_PORT", 00:28:37.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.721 "hdgst": ${hdgst:-false}, 00:28:37.721 "ddgst": ${ddgst:-false} 00:28:37.721 }, 00:28:37.721 "method": "bdev_nvme_attach_controller" 00:28:37.721 } 00:28:37.721 EOF 00:28:37.721 )") 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.721 { 00:28:37.721 "params": { 00:28:37.721 "name": "Nvme$subsystem", 00:28:37.721 "trtype": "$TEST_TRANSPORT", 00:28:37.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.721 "adrfam": "ipv4", 00:28:37.721 "trsvcid": "$NVMF_PORT", 00:28:37.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.721 "hdgst": ${hdgst:-false}, 00:28:37.721 "ddgst": ${ddgst:-false} 00:28:37.721 }, 00:28:37.721 "method": "bdev_nvme_attach_controller" 00:28:37.721 } 00:28:37.721 EOF 00:28:37.721 )") 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.721 { 00:28:37.721 "params": { 00:28:37.721 "name": "Nvme$subsystem", 00:28:37.721 "trtype": "$TEST_TRANSPORT", 00:28:37.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.721 "adrfam": "ipv4", 00:28:37.721 "trsvcid": "$NVMF_PORT", 00:28:37.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.721 "hdgst": ${hdgst:-false}, 00:28:37.721 "ddgst": ${ddgst:-false} 00:28:37.721 }, 00:28:37.721 "method": "bdev_nvme_attach_controller" 00:28:37.721 } 00:28:37.721 EOF 00:28:37.721 )") 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.721 07:13:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.721 { 00:28:37.721 "params": { 00:28:37.721 "name": "Nvme$subsystem", 00:28:37.721 "trtype": "$TEST_TRANSPORT", 00:28:37.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.721 "adrfam": "ipv4", 00:28:37.721 "trsvcid": "$NVMF_PORT", 00:28:37.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.721 "hdgst": ${hdgst:-false}, 00:28:37.721 "ddgst": ${ddgst:-false} 00:28:37.721 }, 00:28:37.721 "method": "bdev_nvme_attach_controller" 00:28:37.721 } 00:28:37.721 EOF 00:28:37.721 )") 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.721 { 00:28:37.721 "params": { 00:28:37.721 "name": "Nvme$subsystem", 00:28:37.721 "trtype": "$TEST_TRANSPORT", 00:28:37.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.721 "adrfam": "ipv4", 00:28:37.721 "trsvcid": "$NVMF_PORT", 00:28:37.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.721 "hdgst": ${hdgst:-false}, 00:28:37.721 "ddgst": ${ddgst:-false} 00:28:37.721 }, 00:28:37.721 "method": "bdev_nvme_attach_controller" 00:28:37.721 } 00:28:37.721 EOF 00:28:37.721 )") 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.721 { 00:28:37.721 "params": { 00:28:37.721 "name": "Nvme$subsystem", 00:28:37.721 "trtype": "$TEST_TRANSPORT", 00:28:37.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.721 "adrfam": "ipv4", 00:28:37.721 "trsvcid": "$NVMF_PORT", 00:28:37.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.721 "hdgst": ${hdgst:-false}, 00:28:37.721 "ddgst": ${ddgst:-false} 00:28:37.721 }, 00:28:37.721 "method": "bdev_nvme_attach_controller" 00:28:37.721 } 00:28:37.721 EOF 00:28:37.721 )") 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
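Above, gen_nvmf_target_json appends one bdev_nvme_attach_controller fragment per subsystem and merges them with jq; bdevperf reads the result via --json /dev/fd/63. Only the fragments are visible in this excerpt, so the enclosing wrapper below is an assumption about the shape of gen_nvmf_target_json's output; the params are copied from the printf output that follows (first controller only, cnode2..cnode10 repeat the same pattern):

```bash
# Hedged sketch of the config file bdevperf consumes; the "subsystems"/"config"
# wrapper is assumed, the attach_controller params come from the trace below.
cat > /tmp/bdevperf_nvmf.json <<-'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON

# Same flags as the launch at shutdown.sh@103: queue depth 64, 64 KiB I/O,
# "verify" workload, 10 second run, private RPC socket used for iostat polling:
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
#     -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvmf.json \
#     -q 64 -o 65536 -w verify -t 10
```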
00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:28:37.721 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:37.721 "params": { 00:28:37.721 "name": "Nvme1", 00:28:37.721 "trtype": "tcp", 00:28:37.721 "traddr": "10.0.0.2", 00:28:37.721 "adrfam": "ipv4", 00:28:37.721 "trsvcid": "4420", 00:28:37.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:37.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:37.721 "hdgst": false, 00:28:37.721 "ddgst": false 00:28:37.721 }, 00:28:37.721 "method": "bdev_nvme_attach_controller" 00:28:37.721 },{ 00:28:37.721 "params": { 00:28:37.721 "name": "Nvme2", 00:28:37.721 "trtype": "tcp", 00:28:37.721 "traddr": "10.0.0.2", 00:28:37.721 "adrfam": "ipv4", 00:28:37.721 "trsvcid": "4420", 00:28:37.721 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:37.721 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:37.721 "hdgst": false, 00:28:37.721 "ddgst": false 00:28:37.721 }, 00:28:37.721 "method": "bdev_nvme_attach_controller" 00:28:37.721 },{ 00:28:37.721 "params": { 00:28:37.721 "name": "Nvme3", 00:28:37.721 "trtype": "tcp", 00:28:37.721 "traddr": "10.0.0.2", 00:28:37.721 "adrfam": "ipv4", 00:28:37.721 "trsvcid": "4420", 00:28:37.721 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:37.721 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:37.721 "hdgst": false, 00:28:37.722 "ddgst": false 00:28:37.722 }, 00:28:37.722 "method": "bdev_nvme_attach_controller" 00:28:37.722 },{ 00:28:37.722 "params": { 00:28:37.722 "name": "Nvme4", 00:28:37.722 "trtype": "tcp", 00:28:37.722 "traddr": "10.0.0.2", 00:28:37.722 "adrfam": "ipv4", 00:28:37.722 "trsvcid": "4420", 00:28:37.722 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:37.722 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:37.722 "hdgst": false, 00:28:37.722 "ddgst": false 00:28:37.722 }, 00:28:37.722 "method": "bdev_nvme_attach_controller" 00:28:37.722 },{ 00:28:37.722 "params": { 00:28:37.722 "name": "Nvme5", 00:28:37.722 "trtype": "tcp", 00:28:37.722 "traddr": "10.0.0.2", 00:28:37.722 "adrfam": "ipv4", 00:28:37.722 "trsvcid": "4420", 00:28:37.722 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:37.722 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:37.722 "hdgst": false, 00:28:37.722 "ddgst": false 00:28:37.722 }, 00:28:37.722 "method": "bdev_nvme_attach_controller" 00:28:37.722 },{ 00:28:37.722 "params": { 00:28:37.722 "name": "Nvme6", 00:28:37.722 "trtype": "tcp", 00:28:37.722 "traddr": "10.0.0.2", 00:28:37.722 "adrfam": "ipv4", 00:28:37.722 "trsvcid": "4420", 00:28:37.722 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:37.722 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:37.722 "hdgst": false, 00:28:37.722 "ddgst": false 00:28:37.722 }, 00:28:37.722 "method": "bdev_nvme_attach_controller" 00:28:37.722 },{ 00:28:37.722 "params": { 00:28:37.722 "name": "Nvme7", 00:28:37.722 "trtype": "tcp", 00:28:37.722 "traddr": "10.0.0.2", 00:28:37.722 "adrfam": "ipv4", 00:28:37.722 "trsvcid": "4420", 00:28:37.722 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:37.722 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:37.722 "hdgst": false, 00:28:37.722 "ddgst": false 00:28:37.722 }, 00:28:37.722 "method": "bdev_nvme_attach_controller" 00:28:37.722 },{ 00:28:37.722 "params": { 00:28:37.722 "name": "Nvme8", 00:28:37.722 "trtype": "tcp", 00:28:37.722 "traddr": "10.0.0.2", 00:28:37.722 "adrfam": "ipv4", 00:28:37.722 "trsvcid": "4420", 00:28:37.722 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:37.722 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:37.722 "hdgst": false, 00:28:37.722 "ddgst": false 00:28:37.722 }, 00:28:37.722 "method": "bdev_nvme_attach_controller" 00:28:37.722 },{ 00:28:37.722 "params": { 00:28:37.722 "name": "Nvme9", 00:28:37.722 "trtype": "tcp", 00:28:37.722 "traddr": "10.0.0.2", 00:28:37.722 "adrfam": "ipv4", 00:28:37.722 "trsvcid": "4420", 00:28:37.722 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:37.722 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:37.722 "hdgst": false, 00:28:37.722 "ddgst": false 00:28:37.722 }, 00:28:37.722 "method": "bdev_nvme_attach_controller" 00:28:37.722 },{ 00:28:37.722 "params": { 00:28:37.722 "name": "Nvme10", 00:28:37.722 "trtype": "tcp", 00:28:37.722 "traddr": "10.0.0.2", 00:28:37.722 "adrfam": "ipv4", 00:28:37.722 "trsvcid": "4420", 00:28:37.722 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:37.722 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:37.722 "hdgst": false, 00:28:37.722 "ddgst": false 00:28:37.722 }, 00:28:37.722 "method": "bdev_nvme_attach_controller" 00:28:37.722 }' 00:28:37.982 [2024-11-18 07:13:58.707655] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:28:37.982 [2024-11-18 07:13:58.707729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid325856 ] 00:28:37.982 [2024-11-18 07:13:58.779381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.982 [2024-11-18 07:13:58.826163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.359 Running I/O for 10 seconds... 00:28:39.926 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:39.926 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:39.926 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:39.926 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.926 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:39.926 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.926 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:39.926 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:39.926 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:39.926 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:39.926 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:39.926 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:39.926 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:39.927 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:39.927 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:39.927 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.927 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:39.927 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.927 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=11 00:28:39.927 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 11 -ge 100 ']' 00:28:39.927 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:40.187 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:40.187 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:40.187 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:40.187 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:40.187 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.187 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:40.187 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.187 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=77 00:28:40.187 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 77 -ge 100 ']' 00:28:40.187 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:40.446 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:40.446 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:40.446 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:40.446 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:40.446 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.446 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:40.446 1476.00 IOPS, 92.25 MiB/s [2024-11-18T06:14:01.424Z] 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.446 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:40.446 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 
']' 00:28:40.446 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:40.446 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:40.446 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:40.446 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 325856 00:28:40.446 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 325856 ']' 00:28:40.446 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 325856 00:28:40.446 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:40.446 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:40.446 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 325856 00:28:40.704 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:40.704 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:40.704 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 325856' 00:28:40.704 killing process with pid 325856 00:28:40.704 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 325856 00:28:40.704 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 325856 00:28:40.704 Received shutdown signal, test time was about 1.191726 seconds 00:28:40.704 00:28:40.704 Latency(us) 00:28:40.704 [2024-11-18T06:14:01.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.704 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:40.704 Verification LBA range: start 0x0 length 0x400 00:28:40.704 Nvme1n1 : 1.19 215.67 13.48 0.00 0.00 293825.99 21845.33 318456.41 00:28:40.704 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:40.704 Verification LBA range: start 0x0 length 0x400 00:28:40.704 Nvme2n1 : 1.14 167.77 10.49 0.00 0.00 371406.82 20097.71 312242.63 00:28:40.704 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:40.704 Verification LBA range: start 0x0 length 0x400 00:28:40.704 Nvme3n1 : 1.19 214.97 13.44 0.00 0.00 285440.95 18835.53 330883.98 00:28:40.704 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:40.704 Verification LBA range: start 0x0 length 0x400 00:28:40.704 Nvme4n1 : 1.17 222.83 13.93 0.00 0.00 269096.23 6990.51 313796.08 00:28:40.704 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:40.704 Verification LBA range: start 0x0 length 0x400 00:28:40.704 Nvme5n1 : 1.16 165.77 10.36 0.00 0.00 357578.78 25826.04 324670.20 00:28:40.704 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:40.704 Verification LBA range: start 0x0 length 0x400 00:28:40.704 Nvme6n1 : 1.17 218.02 13.63 0.00 0.00 266769.45 20583.16 313796.08 00:28:40.704 Job: Nvme7n1 (Core Mask 
0x1, workload: verify, depth: 64, IO size: 65536) 00:28:40.704 Verification LBA range: start 0x0 length 0x400 00:28:40.704 Nvme7n1 : 1.18 217.31 13.58 0.00 0.00 263581.01 18447.17 299815.06 00:28:40.704 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:40.704 Verification LBA range: start 0x0 length 0x400 00:28:40.704 Nvme8n1 : 1.18 216.48 13.53 0.00 0.00 260498.77 26020.22 324670.20 00:28:40.704 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:40.704 Verification LBA range: start 0x0 length 0x400 00:28:40.704 Nvme9n1 : 1.15 167.39 10.46 0.00 0.00 329358.85 41748.86 302921.96 00:28:40.704 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:40.704 Verification LBA range: start 0x0 length 0x400 00:28:40.704 Nvme10n1 : 1.16 164.87 10.30 0.00 0.00 329386.16 21456.97 335544.32 00:28:40.704 [2024-11-18T06:14:01.682Z] =================================================================================================================== 00:28:40.704 [2024-11-18T06:14:01.683Z] Total : 1971.08 123.19 0.00 0.00 297729.22 6990.51 335544.32 00:28:40.962 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 325679 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:41.897 rmmod nvme_tcp 00:28:41.897 rmmod nvme_fabrics 00:28:41.897 rmmod nvme_keyring 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 325679 ']' 00:28:41.897 07:14:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 325679 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 325679 ']' 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 325679 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 325679 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 325679' 00:28:41.897 killing process with pid 325679 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 325679 00:28:41.897 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 325679 00:28:42.466 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:42.466 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:42.466 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:42.466 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:42.466 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:28:42.466 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:42.466 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:28:42.466 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:42.466 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:42.466 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.466 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:42.466 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.005 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:45.005 00:28:45.005 real 0m7.782s 00:28:45.005 user 0m24.031s 00:28:45.005 sys 0m1.546s 00:28:45.005 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:45.005 07:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:45.005 ************************************ 00:28:45.005 END TEST nvmf_shutdown_tc2 00:28:45.005 ************************************ 00:28:45.005 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:45.005 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:45.005 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:45.006 ************************************ 00:28:45.006 START TEST nvmf_shutdown_tc3 00:28:45.006 ************************************ 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:45.006 07:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:45.006 07:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:45.006 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:45.006 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
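For tc3 the harness rediscovers the NICs from scratch: gather_supported_nvmf_pci_devs builds the e810/x722/mlx device-ID lists, settles on the two e810 (0x159b) ports, and maps each PCI function to its kernel net device through sysfs before the "Found net devices" lines below. A condensed sketch of that walk; reading operstate is an assumption about what feeds the [[ up == up ]] test at common.sh@418:

```bash
# Sketch of the sysfs walk traced above: PCI function -> net device name,
# keeping only interfaces whose link is up. PCI addresses are the two e810
# ports reported in the log.
net_devs=()
for pci in 0000:0a:00.0 0000:0a:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    for net_dev in "${pci_net_devs[@]}"; do
        dev=${net_dev##*/}                      # e.g. cvl_0_0, cvl_0_1
        if [[ $(< "$net_dev/operstate") == up ]]; then
            echo "Found net devices under $pci: $dev"
            net_devs+=("$dev")
        fi
    done
done
# With two ports found, the first becomes NVMF_TARGET_INTERFACE (cvl_0_0) and
# the second NVMF_INITIATOR_INTERFACE (cvl_0_1), as at common.sh@258-259.
```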
00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:45.006 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:45.006 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:45.006 07:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:45.006 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:45.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:45.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:28:45.007 00:28:45.007 --- 10.0.0.2 ping statistics --- 00:28:45.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.007 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:45.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:45.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:28:45.007 00:28:45.007 --- 10.0.0.1 ping statistics --- 00:28:45.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.007 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=326772 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 326772 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 326772 ']' 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
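Note: the nvmf_tcp_init sequence traced above is the interesting part of the environment setup: the target port is moved into its own network namespace so that initiator and target can share one host, the default NVMe/TCP port is opened, and reachability is ping-checked in both directions. A minimal sketch of that plumbing, reusing the interface and namespace names from this log (run as root; the real helper adds an iptables comment tag that is omitted here):

#!/usr/bin/env bash
# Sketch of the namespace setup performed by nvmf_tcp_init above, assuming two
# ports of the same NIC are cabled back-to-back. Names are taken from the log.
set -euo pipefail

TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"              # target port lives in its own netns

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"       # initiator side stays in the root netns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Accept inbound TCP to port 4420 on the initiator-side interface, mirroring
# the ipts call in the trace above.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity-check reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1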
00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:45.007 [2024-11-18 07:14:05.670647] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:28:45.007 [2024-11-18 07:14:05.670743] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.007 [2024-11-18 07:14:05.746945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:45.007 [2024-11-18 07:14:05.793042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.007 [2024-11-18 07:14:05.793098] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:45.007 [2024-11-18 07:14:05.793122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.007 [2024-11-18 07:14:05.793133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:45.007 [2024-11-18 07:14:05.793143] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:45.007 [2024-11-18 07:14:05.794576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:45.007 [2024-11-18 07:14:05.794637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:45.007 [2024-11-18 07:14:05.794703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:45.007 [2024-11-18 07:14:05.794706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:45.007 [2024-11-18 07:14:05.931578] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:45.007 07:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.007 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:45.266 Malloc1 
00:28:45.266 [2024-11-18 07:14:06.023612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:45.266 Malloc2 00:28:45.266 Malloc3 00:28:45.266 Malloc4 00:28:45.266 Malloc5 00:28:45.266 Malloc6 00:28:45.524 Malloc7 00:28:45.524 Malloc8 00:28:45.524 Malloc9 00:28:45.524 Malloc10 00:28:45.524 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.524 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:45.524 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:45.524 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:45.524 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=326943 00:28:45.524 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 326943 /var/tmp/bdevperf.sock 00:28:45.524 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 326943 ']' 00:28:45.524 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:45.524 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:45.524 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:45.525 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:45.525 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:28:45.525 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:45.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
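Note: behind the Malloc1..Malloc10 lines above, the test builds ten NVMe-oF subsystems on the TCP transport, each backed by a malloc bdev and listening on 10.0.0.2:4420 (the actual RPCs are batched through rpcs.txt rather than issued one by one). A hedged sketch of an equivalent hand-driven RPC sequence; the rpc.py path and serial numbers are assumptions, and the transport flags are copied from the trace:

#!/usr/bin/env bash
# Sketch of the RPC sequence implied by the trace above: one TCP transport,
# then ten subsystems, each with a malloc namespace and a TCP listener.
set -euo pipefail

RPC="./scripts/rpc.py"        # assumed path to SPDK's rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192      # flags as seen in the trace

for i in $(seq 1 10); do
    $RPC bdev_malloc_create 64 512 -b "Malloc$i"
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done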
00:28:45.525 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:28:45.525 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:45.525 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.525 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:45.525 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.525 { 00:28:45.525 "params": { 00:28:45.525 "name": "Nvme$subsystem", 00:28:45.525 "trtype": "$TEST_TRANSPORT", 00:28:45.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.525 "adrfam": "ipv4", 00:28:45.525 "trsvcid": "$NVMF_PORT", 00:28:45.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.525 "hdgst": ${hdgst:-false}, 00:28:45.525 "ddgst": ${ddgst:-false} 00:28:45.525 }, 00:28:45.525 "method": "bdev_nvme_attach_controller" 00:28:45.525 } 00:28:45.525 EOF 00:28:45.525 )") 00:28:45.525 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:45.525 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.525 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.525 { 00:28:45.525 "params": { 00:28:45.525 "name": "Nvme$subsystem", 00:28:45.525 "trtype": "$TEST_TRANSPORT", 00:28:45.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.525 "adrfam": "ipv4", 00:28:45.525 "trsvcid": "$NVMF_PORT", 00:28:45.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.525 "hdgst": ${hdgst:-false}, 00:28:45.525 "ddgst": ${ddgst:-false} 00:28:45.525 }, 00:28:45.525 "method": "bdev_nvme_attach_controller" 00:28:45.525 } 00:28:45.525 EOF 00:28:45.525 )") 00:28:45.525 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:45.525 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.525 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.525 { 00:28:45.525 "params": { 00:28:45.525 "name": "Nvme$subsystem", 00:28:45.525 "trtype": "$TEST_TRANSPORT", 00:28:45.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.525 "adrfam": "ipv4", 00:28:45.525 "trsvcid": "$NVMF_PORT", 00:28:45.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.525 "hdgst": ${hdgst:-false}, 00:28:45.525 "ddgst": ${ddgst:-false} 00:28:45.525 }, 00:28:45.525 "method": "bdev_nvme_attach_controller" 00:28:45.525 } 00:28:45.525 EOF 00:28:45.525 )") 00:28:45.525 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:45.525 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.525 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.525 { 00:28:45.525 "params": { 00:28:45.525 "name": "Nvme$subsystem", 00:28:45.525 
"trtype": "$TEST_TRANSPORT", 00:28:45.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.525 "adrfam": "ipv4", 00:28:45.525 "trsvcid": "$NVMF_PORT", 00:28:45.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.525 "hdgst": ${hdgst:-false}, 00:28:45.525 "ddgst": ${ddgst:-false} 00:28:45.525 }, 00:28:45.525 "method": "bdev_nvme_attach_controller" 00:28:45.525 } 00:28:45.525 EOF 00:28:45.525 )") 00:28:45.525 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:45.525 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.525 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.525 { 00:28:45.525 "params": { 00:28:45.525 "name": "Nvme$subsystem", 00:28:45.525 "trtype": "$TEST_TRANSPORT", 00:28:45.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.525 "adrfam": "ipv4", 00:28:45.525 "trsvcid": "$NVMF_PORT", 00:28:45.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.525 "hdgst": ${hdgst:-false}, 00:28:45.525 "ddgst": ${ddgst:-false} 00:28:45.525 }, 00:28:45.525 "method": "bdev_nvme_attach_controller" 00:28:45.525 } 00:28:45.525 EOF 00:28:45.525 )") 00:28:45.525 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:45.785 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.785 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.785 { 00:28:45.785 "params": { 00:28:45.785 "name": "Nvme$subsystem", 00:28:45.786 "trtype": "$TEST_TRANSPORT", 00:28:45.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.786 "adrfam": "ipv4", 00:28:45.786 "trsvcid": "$NVMF_PORT", 00:28:45.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.786 "hdgst": ${hdgst:-false}, 00:28:45.786 "ddgst": ${ddgst:-false} 00:28:45.786 }, 00:28:45.786 "method": "bdev_nvme_attach_controller" 00:28:45.786 } 00:28:45.786 EOF 00:28:45.786 )") 00:28:45.786 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:45.786 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.786 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.786 { 00:28:45.786 "params": { 00:28:45.786 "name": "Nvme$subsystem", 00:28:45.786 "trtype": "$TEST_TRANSPORT", 00:28:45.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.786 "adrfam": "ipv4", 00:28:45.786 "trsvcid": "$NVMF_PORT", 00:28:45.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.786 "hdgst": ${hdgst:-false}, 00:28:45.786 "ddgst": ${ddgst:-false} 00:28:45.786 }, 00:28:45.786 "method": "bdev_nvme_attach_controller" 00:28:45.786 } 00:28:45.786 EOF 00:28:45.786 )") 00:28:45.786 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:45.786 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.786 07:14:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.786 { 00:28:45.786 "params": { 00:28:45.786 "name": "Nvme$subsystem", 00:28:45.786 "trtype": "$TEST_TRANSPORT", 00:28:45.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.786 "adrfam": "ipv4", 00:28:45.786 "trsvcid": "$NVMF_PORT", 00:28:45.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.786 "hdgst": ${hdgst:-false}, 00:28:45.786 "ddgst": ${ddgst:-false} 00:28:45.786 }, 00:28:45.786 "method": "bdev_nvme_attach_controller" 00:28:45.786 } 00:28:45.786 EOF 00:28:45.786 )") 00:28:45.786 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:45.786 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.786 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.786 { 00:28:45.786 "params": { 00:28:45.786 "name": "Nvme$subsystem", 00:28:45.786 "trtype": "$TEST_TRANSPORT", 00:28:45.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.786 "adrfam": "ipv4", 00:28:45.786 "trsvcid": "$NVMF_PORT", 00:28:45.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.786 "hdgst": ${hdgst:-false}, 00:28:45.786 "ddgst": ${ddgst:-false} 00:28:45.786 }, 00:28:45.786 "method": "bdev_nvme_attach_controller" 00:28:45.786 } 00:28:45.786 EOF 00:28:45.786 )") 00:28:45.786 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:45.786 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.786 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.786 { 00:28:45.786 "params": { 00:28:45.786 "name": "Nvme$subsystem", 00:28:45.786 "trtype": "$TEST_TRANSPORT", 00:28:45.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.786 "adrfam": "ipv4", 00:28:45.786 "trsvcid": "$NVMF_PORT", 00:28:45.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.786 "hdgst": ${hdgst:-false}, 00:28:45.786 "ddgst": ${ddgst:-false} 00:28:45.786 }, 00:28:45.786 "method": "bdev_nvme_attach_controller" 00:28:45.786 } 00:28:45.786 EOF 00:28:45.786 )") 00:28:45.786 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:45.786 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
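Note: the per-subsystem heredocs generated above are merged by the jq call just traced and handed to bdevperf as the single JSON printed below. A rough standalone equivalent of that gen_nvmf_target_json + bdevperf step; the wrapper JSON layout and binary path are assumptions rather than the helper's exact output:

#!/usr/bin/env bash
# Sketch: build a bdev JSON config that attaches Nvme1..Nvme10 over TCP and
# feed it to bdevperf on a pipe, mirroring the --json /dev/fd/63 usage above.
set -euo pipefail

BDEVPERF=./build/examples/bdevperf     # assumed path

gen_config() {
    local entries=() i
    for i in $(seq 1 10); do
        entries+=("$(cat <<EOF
{ "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme$i", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode$i",
              "hostnqn": "nqn.2016-06.io.spdk:host$i",
              "hdgst": false, "ddgst": false } }
EOF
)")
    done
    # Join the attach calls into one bdev-subsystem config block.
    local IFS=,
    printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${entries[*]}"
}

# 64 queue depth, 64 KiB I/O, verify workload for 10 seconds -- the same knobs
# as the bdevperf invocation in the trace.
$BDEVPERF -r /var/tmp/bdevperf.sock --json <(gen_config) -q 64 -o 65536 -w verify -t 10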
00:28:45.786 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:28:45.786 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:45.786 "params": { 00:28:45.786 "name": "Nvme1", 00:28:45.786 "trtype": "tcp", 00:28:45.786 "traddr": "10.0.0.2", 00:28:45.786 "adrfam": "ipv4", 00:28:45.786 "trsvcid": "4420", 00:28:45.786 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:45.786 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:45.786 "hdgst": false, 00:28:45.786 "ddgst": false 00:28:45.786 }, 00:28:45.786 "method": "bdev_nvme_attach_controller" 00:28:45.786 },{ 00:28:45.786 "params": { 00:28:45.786 "name": "Nvme2", 00:28:45.786 "trtype": "tcp", 00:28:45.786 "traddr": "10.0.0.2", 00:28:45.786 "adrfam": "ipv4", 00:28:45.786 "trsvcid": "4420", 00:28:45.786 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:45.786 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:45.786 "hdgst": false, 00:28:45.786 "ddgst": false 00:28:45.786 }, 00:28:45.786 "method": "bdev_nvme_attach_controller" 00:28:45.786 },{ 00:28:45.786 "params": { 00:28:45.786 "name": "Nvme3", 00:28:45.786 "trtype": "tcp", 00:28:45.786 "traddr": "10.0.0.2", 00:28:45.786 "adrfam": "ipv4", 00:28:45.786 "trsvcid": "4420", 00:28:45.786 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:45.786 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:45.786 "hdgst": false, 00:28:45.786 "ddgst": false 00:28:45.786 }, 00:28:45.786 "method": "bdev_nvme_attach_controller" 00:28:45.786 },{ 00:28:45.786 "params": { 00:28:45.786 "name": "Nvme4", 00:28:45.786 "trtype": "tcp", 00:28:45.786 "traddr": "10.0.0.2", 00:28:45.786 "adrfam": "ipv4", 00:28:45.786 "trsvcid": "4420", 00:28:45.786 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:45.786 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:45.786 "hdgst": false, 00:28:45.786 "ddgst": false 00:28:45.786 }, 00:28:45.786 "method": "bdev_nvme_attach_controller" 00:28:45.786 },{ 00:28:45.786 "params": { 00:28:45.786 "name": "Nvme5", 00:28:45.786 "trtype": "tcp", 00:28:45.786 "traddr": "10.0.0.2", 00:28:45.786 "adrfam": "ipv4", 00:28:45.786 "trsvcid": "4420", 00:28:45.786 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:45.786 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:45.786 "hdgst": false, 00:28:45.786 "ddgst": false 00:28:45.786 }, 00:28:45.786 "method": "bdev_nvme_attach_controller" 00:28:45.786 },{ 00:28:45.786 "params": { 00:28:45.786 "name": "Nvme6", 00:28:45.786 "trtype": "tcp", 00:28:45.786 "traddr": "10.0.0.2", 00:28:45.786 "adrfam": "ipv4", 00:28:45.786 "trsvcid": "4420", 00:28:45.786 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:45.786 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:45.786 "hdgst": false, 00:28:45.786 "ddgst": false 00:28:45.786 }, 00:28:45.786 "method": "bdev_nvme_attach_controller" 00:28:45.786 },{ 00:28:45.786 "params": { 00:28:45.786 "name": "Nvme7", 00:28:45.786 "trtype": "tcp", 00:28:45.786 "traddr": "10.0.0.2", 00:28:45.786 "adrfam": "ipv4", 00:28:45.786 "trsvcid": "4420", 00:28:45.786 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:45.786 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:45.786 "hdgst": false, 00:28:45.786 "ddgst": false 00:28:45.786 }, 00:28:45.786 "method": "bdev_nvme_attach_controller" 00:28:45.786 },{ 00:28:45.786 "params": { 00:28:45.786 "name": "Nvme8", 00:28:45.786 "trtype": "tcp", 00:28:45.786 "traddr": "10.0.0.2", 00:28:45.786 "adrfam": "ipv4", 00:28:45.786 "trsvcid": "4420", 00:28:45.786 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:45.786 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:45.786 "hdgst": false, 00:28:45.786 "ddgst": false 00:28:45.786 }, 00:28:45.786 "method": "bdev_nvme_attach_controller" 00:28:45.786 },{ 00:28:45.786 "params": { 00:28:45.786 "name": "Nvme9", 00:28:45.786 "trtype": "tcp", 00:28:45.786 "traddr": "10.0.0.2", 00:28:45.786 "adrfam": "ipv4", 00:28:45.786 "trsvcid": "4420", 00:28:45.786 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:45.786 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:45.786 "hdgst": false, 00:28:45.786 "ddgst": false 00:28:45.786 }, 00:28:45.786 "method": "bdev_nvme_attach_controller" 00:28:45.786 },{ 00:28:45.786 "params": { 00:28:45.786 "name": "Nvme10", 00:28:45.786 "trtype": "tcp", 00:28:45.786 "traddr": "10.0.0.2", 00:28:45.786 "adrfam": "ipv4", 00:28:45.786 "trsvcid": "4420", 00:28:45.786 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:45.786 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:45.786 "hdgst": false, 00:28:45.786 "ddgst": false 00:28:45.786 }, 00:28:45.786 "method": "bdev_nvme_attach_controller" 00:28:45.786 }' 00:28:45.787 [2024-11-18 07:14:06.532334] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:28:45.787 [2024-11-18 07:14:06.532421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid326943 ] 00:28:45.787 [2024-11-18 07:14:06.603825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.787 [2024-11-18 07:14:06.652109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.162 Running I/O for 10 seconds... 00:28:47.730 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:47.730 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:47.730 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:47.730 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.730 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:47.730 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.730 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:47.730 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:47.730 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:47.730 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:47.730 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:47.730 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:47.730 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:47.730 07:14:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:47.730 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:47.730 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:47.730 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.730 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:47.730 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.730 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:47.730 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:47.730 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:47.999 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:47.999 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:47.999 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:47.999 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:47.999 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.999 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:47.999 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.999 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=136 00:28:47.999 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:28:47.999 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:47.999 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:47.999 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:47.999 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 326772 00:28:47.999 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 326772 ']' 00:28:47.999 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 326772 00:28:48.000 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:28:48.000 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:48.000 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers 
-o comm= 326772 00:28:48.000 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:48.000 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:48.000 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 326772' 00:28:48.000 killing process with pid 326772 00:28:48.000 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 326772 00:28:48.000 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 326772 00:28:48.000 [2024-11-18 07:14:08.924211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f810 is same with the state(6) to be set 00:28:48.000 [2024-11-18 07:14:08.924356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f810 is same with the state(6) to be set 00:28:48.000 [2024-11-18 07:14:08.924374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f810 is same with the state(6) to be set 00:28:48.000 [2024-11-18 07:14:08.924387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f810 is same with the state(6) to be set 00:28:48.000 [2024-11-18 07:14:08.924400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f810 is same with the state(6) to be set 00:28:48.000 [2024-11-18 07:14:08.924413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f810 is same with the state(6) to be set 00:28:48.000 [2024-11-18 07:14:08.924434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f810 is same with the state(6) to be set 00:28:48.000 [2024-11-18 07:14:08.924447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f810 is same with the state(6) to be set 00:28:48.000 [2024-11-18 07:14:08.924459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f810 is same with the state(6) to be set 00:28:48.000 [2024-11-18 07:14:08.924472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f810 is same with the state(6) to be set 00:28:48.000 [2024-11-18 07:14:08.924504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f810 is same with the state(6) to be set 00:28:48.000 [2024-11-18 07:14:08.924518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f810 is same with the state(6) to be set 00:28:48.000 [2024-11-18 07:14:08.924532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f810 is same with the state(6) to be set 00:28:48.000 [2024-11-18 07:14:08.924544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f810 is same with the state(6) to be set 00:28:48.000 [2024-11-18 07:14:08.924565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f810 is same with the state(6) to be set 00:28:48.000 [2024-11-18 07:14:08.924578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f810 is same with the state(6) to be set 00:28:48.000 [2024-11-18 07:14:08.924591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f810 is same with the state(6) to be set 
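Note: the waitforio/killprocess steps traced above poll bdevperf's iostat until Nvme1n1 has completed at least 100 reads, then signal the target so its shutdown path runs under active traffic; the tcp.c error burst that follows is the target tearing down qpairs that still have I/O outstanding. A rough standalone sketch of that pattern; the rpc.py path and PID handling are assumptions, not the autotest helpers:

#!/usr/bin/env bash
# Sketch of the wait-for-I/O-then-kill step: poll bdevperf's RPC socket for
# read completions on Nvme1n1, then SIGTERM the nvmf target under load.
set -euo pipefail

rpc=(./scripts/rpc.py -s /var/tmp/bdevperf.sock)   # assumed rpc.py location
nvmfpid=326772                                      # target PID from this run

for _ in {1..10}; do
    reads=$("${rpc[@]}" bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break                   # enough I/O observed
    sleep 0.25
done

kill "$nvmfpid"                                     # ask the target to shut down
while kill -0 "$nvmfpid" 2>/dev/null; do            # wait for it to exit
    sleep 0.1
done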
[... the tcp.c:1773 "The recv state of tqpair=0x167f810 is same with the state(6) to be set" message above repeats unchanged through 07:14:08.925180 ...]
00:28:48.000 [2024-11-18 07:14:08.930214] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:48.000 [2024-11-18 07:14:08.932652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:48.000 [2024-11-18 07:14:08.932686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:1, cid:2 and cid:3, 07:14:08.932704 through 07:14:08.932778 ...]
00:28:48.000 [2024-11-18 07:14:08.932802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b46450 is same with the state(6) to be set
00:28:48.000 [2024-11-18 07:14:08.932904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16823a0 is same with the state(6) to be set
[... the same tcp.c:1773 message repeats for tqpair=0x16823a0 through 07:14:08.933595 ...]
00:28:48.001 [2024-11-18 07:14:08.934457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167fce0 is same with the state(6) to be set
[... the same tcp.c:1773 message repeats for tqpair=0x167fce0 through 07:14:08.935038 ...]
00:28:48.001 [2024-11-18
07:14:08.935050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167fce0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.935063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167fce0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.935081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167fce0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.935094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167fce0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.935106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167fce0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.935119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167fce0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.935131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167fce0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.935144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167fce0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.935156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167fce0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.935169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167fce0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.935181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167fce0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.935194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167fce0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.935206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167fce0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.935219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167fce0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.935231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167fce0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.935243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167fce0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.935259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167fce0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.935272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167fce0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.935284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167fce0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same 
with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937626] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.937988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.938004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the 
state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.938044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.938057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16801b0 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.939855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.939888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.939905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.939925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.939938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.939951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.939965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.939977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.939991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.940004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.940017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.940029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.940042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.940055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.940068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.940095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.940107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.940120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.940132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.940144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.940156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.940168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.940180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.940193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.002 [2024-11-18 07:14:08.940205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 
07:14:08.940418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.940715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680b70 is same 
with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.941962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.941987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942258] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the 
state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.003 [2024-11-18 07:14:08.942666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.942678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.942691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.942704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.942717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.942729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.942741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.942753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.942766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.942779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.942801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.942813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.942825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681040 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 
07:14:08.944627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same 
with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.944988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.945001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.945015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.945027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.945040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.945051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.004 [2024-11-18 07:14:08.945065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.945077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.945089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.945101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.945113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.945125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.945137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.945148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.945162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681510 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946416] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the 
state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.946992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.947004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.947016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681a00 is same with the state(6) to be set 00:28:48.005 [2024-11-18 07:14:08.947027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
00:28:48.005 [2024-11-18 07:14:08.955560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b46450 (9): Bad file descriptor 
00:28:48.005-00:28:48.006 [2024-11-18 07:14:08.955694 .. 07:14:08.956642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000, each completed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (this block of four aborted admin commands, cid:0 through cid:3, repeats before each of the recv-state errors below) 
00:28:48.005 [2024-11-18 07:14:08.955823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b406f0 is same with the state(6) to be set 
00:28:48.005 [2024-11-18 07:14:08.955999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44700 is same with the state(6) to be set 
00:28:48.006 [2024-11-18 07:14:08.956189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f808d0 is same with the state(6) to be set 
00:28:48.006 [2024-11-18 07:14:08.956362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f86fa0 is same with the state(6) to be set 
00:28:48.006 [2024-11-18 07:14:08.956545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f874d0 is same with the state(6) to be set 
00:28:48.006 [2024-11-18 07:14:08.956657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.006 [2024-11-18 07:14:08.956676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.956691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.006 [2024-11-18 07:14:08.956704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.956717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2110780 is same with the state(6) to be set 00:28:48.006 [2024-11-18 07:14:08.956768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.006 [2024-11-18 07:14:08.956790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.956805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.006 [2024-11-18 07:14:08.956819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.956833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.006 [2024-11-18 07:14:08.956846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.956861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.006 [2024-11-18 07:14:08.956875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.956888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3fa50 is same with the state(6) to be set 00:28:48.006 [2024-11-18 07:14:08.956935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.006 [2024-11-18 07:14:08.956955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.956970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.006 [2024-11-18 07:14:08.956985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.957000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.006 [2024-11-18 07:14:08.957014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.957028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.006 [2024-11-18 
07:14:08.957042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.957056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef50 is same with the state(6) to be set 00:28:48.006 [2024-11-18 07:14:08.957103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.006 [2024-11-18 07:14:08.957124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.957140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.006 [2024-11-18 07:14:08.957154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.957173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.006 [2024-11-18 07:14:08.957187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.957202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.006 [2024-11-18 07:14:08.957216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.957230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4e0a0 is same with the state(6) to be set 00:28:48.006 [2024-11-18 07:14:08.957467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.006 [2024-11-18 07:14:08.957502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.957531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.006 [2024-11-18 07:14:08.957547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.957565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.006 [2024-11-18 07:14:08.957579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.957595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.006 [2024-11-18 07:14:08.957609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.957625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.006 [2024-11-18 07:14:08.957640] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.957656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.006 [2024-11-18 07:14:08.957670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.957686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.006 [2024-11-18 07:14:08.957701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.957717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.006 [2024-11-18 07:14:08.957731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.957747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.006 [2024-11-18 07:14:08.957762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.957777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.006 [2024-11-18 07:14:08.957792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.957813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.006 [2024-11-18 07:14:08.957829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.957845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.006 [2024-11-18 07:14:08.957859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.957875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.006 [2024-11-18 07:14:08.957890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.006 [2024-11-18 07:14:08.957905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.006 [2024-11-18 07:14:08.957920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.957936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.957950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.957966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.957981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.957997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.958979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.958998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.959015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.959030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.959045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.959060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.959076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.959091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.959106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.959121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.959137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.959152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.959169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.959184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.959199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.959214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.959238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.007 [2024-11-18 07:14:08.959254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.007 [2024-11-18 07:14:08.959269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.959284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.959300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.959314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.959331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.959345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.959361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.959376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.959396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.959412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.959428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.959442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.959458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.959473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.959497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.959514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:48.008 [2024-11-18 07:14:08.960055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 
[2024-11-18 07:14:08.960381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960708] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.008 [2024-11-18 07:14:08.960979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.008 [2024-11-18 07:14:08.960995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961668] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.961983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.961998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.962014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.962029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.962045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.962060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.009 [2024-11-18 07:14:08.965004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:28:48.009 [2024-11-18 07:14:08.965058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b44700 (9): Bad file descriptor 00:28:48.009 [2024-11-18 07:14:08.965696] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:48.009 [2024-11-18 07:14:08.965730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:28:48.009 [2024-11-18 07:14:08.965758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2110780 (9): Bad file descriptor 00:28:48.009 [2024-11-18 07:14:08.965811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b406f0 (9): Bad file descriptor 00:28:48.009 [2024-11-18 07:14:08.965851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f808d0 (9): Bad file descriptor 00:28:48.009 [2024-11-18 07:14:08.965884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f86fa0 (9): Bad file descriptor 00:28:48.009 [2024-11-18 07:14:08.965915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f874d0 (9): Bad file descriptor 00:28:48.009 [2024-11-18 07:14:08.965948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3fa50 (9): Bad file descriptor 00:28:48.009 [2024-11-18 07:14:08.965979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8ef50 (9): Bad file descriptor 00:28:48.009 [2024-11-18 07:14:08.966010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4e0a0 (9): Bad file descriptor 00:28:48.009 [2024-11-18 07:14:08.966094] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:48.009 [2024-11-18 07:14:08.966648] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:48.009 [2024-11-18 07:14:08.966726] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:48.009 [2024-11-18 07:14:08.966831] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:48.009 [2024-11-18 07:14:08.967293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.009 [2024-11-18 07:14:08.967323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error 
of tqpair=0x1b44700 with addr=10.0.0.2, port=4420 00:28:48.009 [2024-11-18 07:14:08.967349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44700 is same with the state(6) to be set 00:28:48.009 [2024-11-18 07:14:08.967440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.009 [2024-11-18 07:14:08.967464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.967487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.967517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.967536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.967552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.967568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.967583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.967600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.967615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.967632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.967647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.967663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.967680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.967696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.967712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.967728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.967743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.967760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.967774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.967790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.967805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.967821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.967836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.967858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.967874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.967891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.967905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.967930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.967946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.967962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.967977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.967993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.968007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.968024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.968039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.968055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.968069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.968085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:48.010 [2024-11-18 07:14:08.968101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.968117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.968132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.968147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.968161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.968178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.968192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.968208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.968222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.968238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.968257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.968274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.968288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.968304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.968318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.968334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.968348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.010 [2024-11-18 07:14:08.968365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.010 [2024-11-18 07:14:08.968379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.277 [2024-11-18 07:14:08.968395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.277 [2024-11-18 
07:14:08.968412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.277 [2024-11-18 07:14:08.968434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.277 [2024-11-18 07:14:08.968451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.277 [2024-11-18 07:14:08.968468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.277 [2024-11-18 07:14:08.968482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.277 [2024-11-18 07:14:08.968506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.277 [2024-11-18 07:14:08.968522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.277 [2024-11-18 07:14:08.968538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.277 [2024-11-18 07:14:08.968553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.277 [2024-11-18 07:14:08.968570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.277 [2024-11-18 07:14:08.968585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.277 [2024-11-18 07:14:08.968601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.968617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.968633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.968648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.968670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.968686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.968704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.968719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.968735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.968750] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.968766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.968781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.968796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.968811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.968828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.968842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.968857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.968872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.968888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.968902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.968918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.968933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.968954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.968969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.968985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.968999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.969015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.969030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.969045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.969064] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.969082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.969096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.969112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.969126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.969142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.969157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.969172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.969187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.969203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.969217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.969233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.969248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.969264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.969278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.969294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.969309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.969325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.982321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.982406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.982422] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.982440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.982456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.982472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.982487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.982539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.982556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.982572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.982588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.982604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b48cd0 is same with the state(6) to be set 00:28:48.278 [2024-11-18 07:14:08.984033] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:48.278 [2024-11-18 07:14:08.984190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.984214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.984241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.984258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.984275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.984290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.984306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.984321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.984337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.984352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.984368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.984383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.984399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.984414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.984430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.984445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.984461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.984475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.278 [2024-11-18 07:14:08.984498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.278 [2024-11-18 07:14:08.984515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.984537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.984553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.984569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.984583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.984599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.984615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.984632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.984647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.984663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.984678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:48.279 [2024-11-18 07:14:08.984694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.984709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.984724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.984739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.984755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.984771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.984788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.984803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.984818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.984833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.984850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.984865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.984882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.984896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.984912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.984930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.984947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.984962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.984978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.984992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 
07:14:08.985008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.985024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.985040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.985055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.985071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.985086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.985102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.985117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.985133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.985148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.985164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.985180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.985197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.985211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.985228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.985242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.985259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.985273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.985289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.985303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.985324] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.985339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.985355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.985370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.985386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.985401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.985416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.985432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.985448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.985463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.985478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.985499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.985518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.985533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.985549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.985564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.985580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.985594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.985611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.985626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.985642] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.985657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.985673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.985688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.985704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.279 [2024-11-18 07:14:08.985727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.279 [2024-11-18 07:14:08.985744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.985759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.985775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.985790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.985807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.985821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.985837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.985852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.985868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.985882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.985898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.985913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.985929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.985943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.985960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.985975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.985991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.986006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.986022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.986037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.986053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.986067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.986084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.986100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.986121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.986137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.986153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.986167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.986184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.986199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.986215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.986230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.986244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20da3b0 is same with the state(6) to be set 00:28:48.280 [2024-11-18 07:14:08.986359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:48.280 [2024-11-18 07:14:08.986537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:48.280 [2024-11-18 07:14:08.986567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2110780 with addr=10.0.0.2, port=4420 00:28:48.280 [2024-11-18 07:14:08.986584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2110780 is same with the state(6) to be set 00:28:48.280 [2024-11-18 07:14:08.986611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b44700 (9): Bad file descriptor 00:28:48.280 [2024-11-18 07:14:08.986696] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:28:48.280 [2024-11-18 07:14:08.986734] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:28:48.280 [2024-11-18 07:14:08.986758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2110780 (9): Bad file descriptor 00:28:48.280 [2024-11-18 07:14:08.986821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.986844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.986865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.986882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.986898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.986914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.986930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.986945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.986962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.986982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.986999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.987015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.987031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.987046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.987062] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.987076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.987092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.987107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.987123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.987138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.987154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.987169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.987185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.987200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.987216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.987231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.987247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.987262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.987277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.987293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.987309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.987324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.987341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.987355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.987377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.987392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.987408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.280 [2024-11-18 07:14:08.987423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.280 [2024-11-18 07:14:08.987439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.987454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.987470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.987485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.987513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.987529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.987545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.987559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.987576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.987592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.987608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.987622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.987638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.987653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.987670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.987684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.987700] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.987715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.987731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.987745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.987762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.987781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.987798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.987813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.987829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.987844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.987861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.987875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.987891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.987906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.987922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.987937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.987953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.987968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.987984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.987999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.988016] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.988030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.988046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.988060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.988076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.988092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.988108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.988122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.988139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.988153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.988174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.988189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.988205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.988220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.988235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.988250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.988267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.988281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.988298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.988313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.988329] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.988344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.988360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.988374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.988391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.988405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.988421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.988436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.988452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.988468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.988484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.988510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.281 [2024-11-18 07:14:08.988526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ad7d0 is same with the state(6) to be set 00:28:48.281 [2024-11-18 07:14:08.988619] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:28:48.281 [2024-11-18 07:14:08.989847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:28:48.281 [2024-11-18 07:14:08.989968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.281 [2024-11-18 07:14:08.989995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b46450 with addr=10.0.0.2, port=4420 00:28:48.281 [2024-11-18 07:14:08.990013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b46450 is same with the state(6) to be set 00:28:48.281 [2024-11-18 07:14:08.990032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:48.281 [2024-11-18 07:14:08.990045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:48.281 [2024-11-18 07:14:08.990061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:28:48.281 [2024-11-18 07:14:08.990076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:28:48.281 [2024-11-18 07:14:08.990400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.281 [2024-11-18 07:14:08.990423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.990445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.990461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.990479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.990503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.990522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.990537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.990554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.990568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.990585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.990600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.990615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.990630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.990646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.990661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.990677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.990691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.990707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.990734] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.990751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.990766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.990781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.990795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.990811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.990826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.990841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.990856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.990871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.990885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.990902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.990916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.990931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.990946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.990961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.990976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.990991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.991006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.991022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.991036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.991052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.991066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.991082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.991097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.991116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.991131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.991147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.991162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.991178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.991192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.991214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.991231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.991247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.991262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.991278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.991292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.991308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.991323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.991339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.991353] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.991369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.991384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.991400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.991414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.991430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.991444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.282 [2024-11-18 07:14:08.991460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.282 [2024-11-18 07:14:08.991475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.991497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.991518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.991535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.991549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.991565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.991580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.991595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.991610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.991627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.991641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.991657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.991672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.991687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.991701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.991723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.991739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.991755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.991770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.991785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.991801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.991816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.991831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.991847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.991861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.991877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.991892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.991912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.991927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.991943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.991957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.991974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.991988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.992004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.992018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.992034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.992048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.992064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.992078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.992093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.992108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.992124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.992138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.992154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.992169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.992184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.992198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.992216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.992232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.992247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.992262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.992278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.992296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.992313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.992328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.992344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.992358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.992375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.992390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.992406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.992420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.992435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b49e90 is same with the state(6) to be set 00:28:48.283 [2024-11-18 07:14:08.994772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.994798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.994819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.994836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.994853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.994868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.994885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.994899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.994915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.994930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.994946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.994961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.994977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.994992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.995007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.995027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.283 [2024-11-18 07:14:08.995045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.283 [2024-11-18 07:14:08.995060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.995978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.995994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.996009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.996024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.996038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.996059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.996074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.996090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.996106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.996122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.996136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.996152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.996166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.996182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.996196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.996212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:48.284 [2024-11-18 07:14:08.996235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.996251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.996266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.996281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.284 [2024-11-18 07:14:08.996296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.284 [2024-11-18 07:14:08.996311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:08.996325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:08.996341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:08.996355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:08.996371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:08.996385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:08.996401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:08.996415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:08.996431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:08.996445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:08.996461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:08.996476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:08.996504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:08.996532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:08.996548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 
07:14:08.996563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:08.996580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.004745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.004812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.004830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.004858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.004874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.004890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.004904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.004921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.004935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.004952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.004966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.004981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.004996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.005012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.005026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.005041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21affd0 is same with the state(6) to be set 00:28:48.285 [2024-11-18 07:14:09.006388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.006413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.006438] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.006454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.006471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.006485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.006514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.006530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.006547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.006561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.006577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.006593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.006615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.006630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.006647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.006662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.006678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.006693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.006709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.006723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.006740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.006754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.006769] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.006783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.006799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.006814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.006830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.006844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.006860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.006875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.006891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.006905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.006921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.006935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.006951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.006966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.006982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.007001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.007018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.007033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.007048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.007063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.007078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.007093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.007108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.285 [2024-11-18 07:14:09.007123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.285 [2024-11-18 07:14:09.007139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.007983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.007998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.008014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:48.286 [2024-11-18 07:14:09.008030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.008045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.008060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.008076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.008090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.008106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.008121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.008137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.008152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.008171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.008186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.008202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.286 [2024-11-18 07:14:09.008217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.286 [2024-11-18 07:14:09.008233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.008247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.008264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.008279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.008295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.008309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.008325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 
07:14:09.008340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.008355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.008369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.008385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.008400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.008414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f511e0 is same with the state(6) to be set 00:28:48.287 [2024-11-18 07:14:09.009660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.009684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.009706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.009722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.009739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.009753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.009769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.009784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.009805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.009821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.009837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.009851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.009867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.009882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.009897] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.009912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.009928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.009942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.009959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.009973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.009988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.010002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.010018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.010032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.010048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.010062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.010079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.010093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.010109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.010124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.010140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.010154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.010170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.010188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.010206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.010221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.010236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.010251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.010267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.010282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.010298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.010313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.010329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.010343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.010360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.010374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.010390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.010404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.010420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.010435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.010450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.010465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.010481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.010504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.010523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.010547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.010562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.010576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.010596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.010612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.010627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.010642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.010658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.010672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.287 [2024-11-18 07:14:09.010688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.287 [2024-11-18 07:14:09.010703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.010719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.010733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.010749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.010764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.010781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.010795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.010811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.010825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.010841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.010856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.010872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.010887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.010903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.010918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.010934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.010948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.010964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.010982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.010999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.011014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.011030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.011045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.011061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.011075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.011091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.011106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.011122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.011136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.011152] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.011167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.011183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.011197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.011213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.011228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.011244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.011259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.011276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.011291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.011307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.011321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.011338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.011352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.011372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.011387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.011403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.011417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.011434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.011448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.011465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.011479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.011502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.011519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.011535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.011549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.011565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.011579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.011596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.011610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.011627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.011641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.011657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.011672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.011686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d8f20 is same with the state(6) to be set 00:28:48.288 [2024-11-18 07:14:09.012941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.012965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.012987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.013003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.013020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.013040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.013057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.013073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.013089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.013104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.013120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.013135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.013151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.013166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.288 [2024-11-18 07:14:09.013181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.288 [2024-11-18 07:14:09.013196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013351] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.013976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.013991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.014007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.014022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.014037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.014052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.014068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.014082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.014098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.014112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.014128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.014143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.014159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.014173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.014189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.014203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.014229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.014244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.014261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.014275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.014291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.014305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.014321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.014336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.014352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.014367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.014383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.014397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.014414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.289 [2024-11-18 07:14:09.014429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.289 [2024-11-18 07:14:09.014445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.290 [2024-11-18 07:14:09.014460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.290 [2024-11-18 07:14:09.014477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.290 [2024-11-18 07:14:09.014501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.290 [2024-11-18 07:14:09.014520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.290 [2024-11-18 07:14:09.014534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.290 [2024-11-18 07:14:09.014550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.290 [2024-11-18 07:14:09.014565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.290 [2024-11-18 07:14:09.014581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.290 [2024-11-18 07:14:09.014596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.290 [2024-11-18 07:14:09.014613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.290 [2024-11-18 07:14:09.014631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:48.290 [2024-11-18 07:14:09.014648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.290 [2024-11-18 07:14:09.014663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.290 [2024-11-18 07:14:09.014679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.290 [2024-11-18 07:14:09.014693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.290 [2024-11-18 07:14:09.014709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.290 [2024-11-18 07:14:09.014724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.290 [2024-11-18 07:14:09.014740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.290 [2024-11-18 07:14:09.014756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.290 [2024-11-18 07:14:09.014772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.290 [2024-11-18 07:14:09.014787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.290 [2024-11-18 07:14:09.014803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.290 [2024-11-18 07:14:09.014818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.290 [2024-11-18 07:14:09.014834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.290 [2024-11-18 07:14:09.014849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.290 [2024-11-18 07:14:09.014865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.290 [2024-11-18 07:14:09.014879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.290 [2024-11-18 07:14:09.014895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.290 [2024-11-18 07:14:09.014911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.290 [2024-11-18 07:14:09.014928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.290 [2024-11-18 07:14:09.014942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.290 [2024-11-18 
07:14:09.014959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.290 [2024-11-18 07:14:09.014973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.290 [2024-11-18 07:14:09.014988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20dcdd0 is same with the state(6) to be set 00:28:48.290 [2024-11-18 07:14:09.016616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:48.290 [2024-11-18 07:14:09.016654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:28:48.290 [2024-11-18 07:14:09.016675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:48.290 [2024-11-18 07:14:09.016695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:28:48.290 [2024-11-18 07:14:09.016937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.290 [2024-11-18 07:14:09.016967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b3fa50 with addr=10.0.0.2, port=4420 00:28:48.290 [2024-11-18 07:14:09.016984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3fa50 is same with the state(6) to be set 00:28:48.290 [2024-11-18 07:14:09.017011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b46450 (9): Bad file descriptor 00:28:48.290 [2024-11-18 07:14:09.017031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:28:48.290 [2024-11-18 07:14:09.017045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:28:48.290 [2024-11-18 07:14:09.017061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:28:48.290 [2024-11-18 07:14:09.017078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:28:48.290 [2024-11-18 07:14:09.017132] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:28:48.290 [2024-11-18 07:14:09.017157] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:28:48.290 [2024-11-18 07:14:09.017194] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
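The "(00/08)" pair printed with each aborted READ above is the NVMe status code type and status code: SCT 0x00 (generic command status) with SC 0x08, i.e. the command was aborted because its submission queue was deleted, which is the expected outcome when the target side is torn down while I/O is still in flight. A minimal, non-exhaustive helper for turning that pair into a readable name (only values that actually occur in this log are mapped; everything else falls through):

    # map an "SCT/SC" pair from the completion dump above to a readable name
    nvme_status_name() {
        case "$1/$2" in
            00/00) echo "SUCCESS" ;;
            00/08) echo "ABORTED - SQ DELETION" ;;  # SQ deleted while the command was queued
            *)     echo "other status (see the NVMe base spec / nvme_qpair.c)" ;;
        esac
    }
    nvme_status_name 00 08   # -> ABORTED - SQ DELETION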
00:28:48.290 [2024-11-18 07:14:09.017216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3fa50 (9): Bad file descriptor 00:28:48.290 [2024-11-18 07:14:09.017600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:28:48.290 task offset: 24576 on job bdev=Nvme4n1 fails 00:28:48.290 00:28:48.290 Latency(us) 00:28:48.290 [2024-11-18T06:14:09.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.290 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.290 Job: Nvme1n1 ended in about 0.88 seconds with error 00:28:48.290 Verification LBA range: start 0x0 length 0x400 00:28:48.290 Nvme1n1 : 0.88 151.95 9.50 73.12 0.00 281081.06 33399.09 240784.12 00:28:48.290 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.290 Job: Nvme2n1 ended in about 0.89 seconds with error 00:28:48.290 Verification LBA range: start 0x0 length 0x400 00:28:48.290 Nvme2n1 : 0.89 144.62 9.04 72.31 0.00 285513.20 17961.72 256318.58 00:28:48.290 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.290 Job: Nvme3n1 ended in about 0.89 seconds with error 00:28:48.290 Verification LBA range: start 0x0 length 0x400 00:28:48.290 Nvme3n1 : 0.89 165.88 10.37 59.81 0.00 266862.83 18835.53 260978.92 00:28:48.290 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.290 Job: Nvme4n1 ended in about 0.86 seconds with error 00:28:48.290 Verification LBA range: start 0x0 length 0x400 00:28:48.290 Nvme4n1 : 0.86 224.53 14.03 74.84 0.00 197274.17 8107.05 260978.92 00:28:48.290 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.290 Job: Nvme5n1 ended in about 0.90 seconds with error 00:28:48.290 Verification LBA range: start 0x0 length 0x400 00:28:48.290 Nvme5n1 : 0.90 142.59 8.91 71.30 0.00 271177.96 22039.51 264085.81 00:28:48.290 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.290 Job: Nvme6n1 ended in about 0.90 seconds with error 00:28:48.290 Verification LBA range: start 0x0 length 0x400 00:28:48.290 Nvme6n1 : 0.90 142.07 8.88 71.03 0.00 266087.28 28350.39 256318.58 00:28:48.290 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.290 Job: Nvme7n1 ended in about 0.90 seconds with error 00:28:48.290 Verification LBA range: start 0x0 length 0x400 00:28:48.290 Nvme7n1 : 0.90 141.55 8.85 70.78 0.00 261083.59 21554.06 262532.36 00:28:48.290 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.290 Job: Nvme8n1 ended in about 0.88 seconds with error 00:28:48.290 Verification LBA range: start 0x0 length 0x400 00:28:48.290 Nvme8n1 : 0.88 150.92 9.43 72.62 0.00 241018.96 22233.69 267192.70 00:28:48.290 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.290 Job: Nvme9n1 ended in about 0.86 seconds with error 00:28:48.290 Verification LBA range: start 0x0 length 0x400 00:28:48.290 Nvme9n1 : 0.86 149.45 9.34 74.73 0.00 232917.14 8641.04 290494.39 00:28:48.290 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.290 Job: Nvme10n1 ended in about 0.91 seconds with error 00:28:48.290 Verification LBA range: start 0x0 length 0x400 00:28:48.290 Nvme10n1 : 0.91 141.04 8.81 70.52 0.00 244142.59 19709.35 265639.25 00:28:48.290 [2024-11-18T06:14:09.268Z] 
=================================================================================================================== 00:28:48.290 [2024-11-18T06:14:09.268Z] Total : 1554.60 97.16 711.05 0.00 252959.76 8107.05 290494.39 00:28:48.290 [2024-11-18 07:14:09.044882] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:48.290 [2024-11-18 07:14:09.044979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:28:48.290 [2024-11-18 07:14:09.045250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.290 [2024-11-18 07:14:09.045286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b4e0a0 with addr=10.0.0.2, port=4420 00:28:48.291 [2024-11-18 07:14:09.045306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4e0a0 is same with the state(6) to be set 00:28:48.291 [2024-11-18 07:14:09.045402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.291 [2024-11-18 07:14:09.045429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b406f0 with addr=10.0.0.2, port=4420 00:28:48.291 [2024-11-18 07:14:09.045446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b406f0 is same with the state(6) to be set 00:28:48.291 [2024-11-18 07:14:09.045559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.291 [2024-11-18 07:14:09.045586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f808d0 with addr=10.0.0.2, port=4420 00:28:48.291 [2024-11-18 07:14:09.045603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f808d0 is same with the state(6) to be set 00:28:48.291 [2024-11-18 07:14:09.045691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.291 [2024-11-18 07:14:09.045718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a8ef50 with addr=10.0.0.2, port=4420 00:28:48.291 [2024-11-18 07:14:09.045734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef50 is same with the state(6) to be set 00:28:48.291 [2024-11-18 07:14:09.045755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:48.291 [2024-11-18 07:14:09.045770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:48.291 [2024-11-18 07:14:09.045785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:48.291 [2024-11-18 07:14:09.045802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
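The Latency summary above is bdevperf's end-of-run table for the ten verify jobs (core mask 0x1, queue depth 64, 64 KiB I/O); the Total row is just the column-wise sum of the per-device rows. A quick sanity check of the IOPS column against the reported total, using the figures printed above:

    # sum the per-device IOPS column and compare with the Total row (1554.60)
    printf '%s\n' 151.95 144.62 165.88 224.53 142.59 142.07 141.55 150.92 149.45 141.04 |
        awk '{ s += $1 } END { printf "IOPS sum = %.2f (reported total: 1554.60)\n", s }'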
00:28:48.291 [2024-11-18 07:14:09.047462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:28:48.291 [2024-11-18 07:14:09.047514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:28:48.291 [2024-11-18 07:14:09.047683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.291 [2024-11-18 07:14:09.047713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f874d0 with addr=10.0.0.2, port=4420 00:28:48.291 [2024-11-18 07:14:09.047730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f874d0 is same with the state(6) to be set 00:28:48.291 [2024-11-18 07:14:09.047811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.291 [2024-11-18 07:14:09.047838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f86fa0 with addr=10.0.0.2, port=4420 00:28:48.291 [2024-11-18 07:14:09.047854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f86fa0 is same with the state(6) to be set 00:28:48.291 [2024-11-18 07:14:09.047879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4e0a0 (9): Bad file descriptor 00:28:48.291 [2024-11-18 07:14:09.047901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b406f0 (9): Bad file descriptor 00:28:48.291 [2024-11-18 07:14:09.047920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f808d0 (9): Bad file descriptor 00:28:48.291 [2024-11-18 07:14:09.047938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8ef50 (9): Bad file descriptor 00:28:48.291 [2024-11-18 07:14:09.047955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:28:48.291 [2024-11-18 07:14:09.047970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:28:48.291 [2024-11-18 07:14:09.047984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:28:48.291 [2024-11-18 07:14:09.047998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:28:48.291 [2024-11-18 07:14:09.048064] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:28:48.291 [2024-11-18 07:14:09.048091] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:28:48.291 [2024-11-18 07:14:09.048111] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:28:48.291 [2024-11-18 07:14:09.048133] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 
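The "connect() failed, errno = 111" lines are ECONNREFUSED: the reconnect attempts race against a target that is being shut down, so nothing is listening on 10.0.0.2:4420 by the time the initiator dials back in. A quick way to check the listener state from a shell on the initiator side, assuming bash's /dev/tcp pseudo-device and coreutils timeout are available (a diagnostic sketch, not part of the test):

    timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null &&
        echo "port 4420 is accepting connections" ||
        echo "connection refused / no listener (the errno = 111 seen above)"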
00:28:48.291 [2024-11-18 07:14:09.048592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.291 [2024-11-18 07:14:09.048625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b44700 with addr=10.0.0.2, port=4420 00:28:48.291 [2024-11-18 07:14:09.048642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44700 is same with the state(6) to be set 00:28:48.291 [2024-11-18 07:14:09.048719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.291 [2024-11-18 07:14:09.048746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2110780 with addr=10.0.0.2, port=4420 00:28:48.291 [2024-11-18 07:14:09.048762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2110780 is same with the state(6) to be set 00:28:48.291 [2024-11-18 07:14:09.048782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f874d0 (9): Bad file descriptor 00:28:48.291 [2024-11-18 07:14:09.048803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f86fa0 (9): Bad file descriptor 00:28:48.291 [2024-11-18 07:14:09.048820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:48.291 [2024-11-18 07:14:09.048839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:48.291 [2024-11-18 07:14:09.048853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:48.291 [2024-11-18 07:14:09.048868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:48.291 [2024-11-18 07:14:09.048885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:48.291 [2024-11-18 07:14:09.048899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:48.291 [2024-11-18 07:14:09.048912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:48.291 [2024-11-18 07:14:09.048925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:28:48.291 [2024-11-18 07:14:09.048939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:48.291 [2024-11-18 07:14:09.048952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:48.291 [2024-11-18 07:14:09.048965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:48.291 [2024-11-18 07:14:09.048978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:48.291 [2024-11-18 07:14:09.048993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:48.291 [2024-11-18 07:14:09.049005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:48.291 [2024-11-18 07:14:09.049019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:28:48.291 [2024-11-18 07:14:09.049031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:28:48.291 [2024-11-18 07:14:09.049129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:48.291 [2024-11-18 07:14:09.049154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:28:48.291 [2024-11-18 07:14:09.049189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b44700 (9): Bad file descriptor 00:28:48.291 [2024-11-18 07:14:09.049212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2110780 (9): Bad file descriptor 00:28:48.291 [2024-11-18 07:14:09.049229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:28:48.291 [2024-11-18 07:14:09.049242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:28:48.291 [2024-11-18 07:14:09.049256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:28:48.291 [2024-11-18 07:14:09.049270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:28:48.291 [2024-11-18 07:14:09.049285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:48.291 [2024-11-18 07:14:09.049298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:48.291 [2024-11-18 07:14:09.049311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:48.291 [2024-11-18 07:14:09.049324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:28:48.291 [2024-11-18 07:14:09.049434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.291 [2024-11-18 07:14:09.049461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b46450 with addr=10.0.0.2, port=4420 00:28:48.291 [2024-11-18 07:14:09.049484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b46450 is same with the state(6) to be set 00:28:48.291 [2024-11-18 07:14:09.049583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.291 [2024-11-18 07:14:09.049608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b3fa50 with addr=10.0.0.2, port=4420 00:28:48.291 [2024-11-18 07:14:09.049625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3fa50 is same with the state(6) to be set 00:28:48.291 [2024-11-18 07:14:09.049641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:48.291 [2024-11-18 07:14:09.049654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:48.292 [2024-11-18 07:14:09.049669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:48.292 [2024-11-18 07:14:09.049682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
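With ten subsystems (cnode1 through cnode10) failing over at once, the same four-line pattern repeats per controller: process_init reports the controller in error state, the async reconnect poll fails, the controller is marked failed, and the reset completes with an error. Two one-liners for summarising a saved copy of this console output (build.log is a hypothetical filename for the captured log):

    # aborted I/O per queue (the READ dumps above)
    grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' build.log | awk '{print $NF}' | sort | uniq -c
    # controllers whose reset ultimately failed
    grep -o 'cnode[0-9]*, 1] Resetting controller failed' build.log | grep -o 'cnode[0-9]*' | sort -V | uniq -c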
00:28:48.292 [2024-11-18 07:14:09.049698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:28:48.292 [2024-11-18 07:14:09.049711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:28:48.292 [2024-11-18 07:14:09.049724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:28:48.292 [2024-11-18 07:14:09.049738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:28:48.292 [2024-11-18 07:14:09.049784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b46450 (9): Bad file descriptor 00:28:48.292 [2024-11-18 07:14:09.049809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3fa50 (9): Bad file descriptor 00:28:48.292 [2024-11-18 07:14:09.049852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:48.292 [2024-11-18 07:14:09.049874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:48.292 [2024-11-18 07:14:09.049890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:48.292 [2024-11-18 07:14:09.049903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:48.292 [2024-11-18 07:14:09.049918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:28:48.292 [2024-11-18 07:14:09.049931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:28:48.292 [2024-11-18 07:14:09.049944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:28:48.292 [2024-11-18 07:14:09.049957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
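What follows is the tail of shutdown_tc3 itself: after a one-second sleep the script asserts that waiting on the bdevperf job (pid 326943) yields a non-zero status, using autotest_common.sh's NOT/valid_exec_arg machinery. Stripped of the instrumentation, the inversion amounts to something like this (a sketch of the pattern, not the helper's literal code):

    expect_failure() {
        "$@"                          # e.g. expect_failure wait "$perf_pid"
        local es=$?
        (( es > 128 )) && es=127      # normalise signal-style exit codes, as the trace shows
        (( es != 0 ))                 # succeed only if the wrapped command failed
    }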
00:28:48.549 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 326943 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 326943 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 326943 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:49.488 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:49.488 rmmod nvme_tcp 00:28:49.749 
rmmod nvme_fabrics 00:28:49.749 rmmod nvme_keyring 00:28:49.749 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:49.749 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:49.749 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:49.749 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 326772 ']' 00:28:49.749 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 326772 00:28:49.749 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 326772 ']' 00:28:49.749 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 326772 00:28:49.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (326772) - No such process 00:28:49.749 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 326772 is not found' 00:28:49.749 Process with pid 326772 is not found 00:28:49.749 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:49.749 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:49.749 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:49.749 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:28:49.749 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:28:49.749 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:49.749 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:28:49.749 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:49.749 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:49.749 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.749 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.749 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.652 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:51.652 00:28:51.652 real 0m7.141s 00:28:51.652 user 0m16.808s 00:28:51.652 sys 0m1.348s 00:28:51.652 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:51.653 ************************************ 00:28:51.653 END TEST nvmf_shutdown_tc3 00:28:51.653 ************************************ 00:28:51.653 07:14:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:51.653 ************************************ 00:28:51.653 START TEST nvmf_shutdown_tc4 00:28:51.653 ************************************ 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:51.653 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:51.912 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:51.912 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.912 07:14:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:51.912 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:51.912 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:51.912 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:51.913 07:14:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:51.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:51.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:28:51.913 00:28:51.913 --- 10.0.0.2 ping statistics --- 00:28:51.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.913 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:51.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:51.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:28:51.913 00:28:51.913 --- 10.0.0.1 ping statistics --- 00:28:51.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.913 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=327818 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 327818 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 327818 ']' 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
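shutdown_tc4 then rebuilds the usual two-port TCP fixture: the second e810 port stays in the root namespace as the initiator (cvl_0_1, 10.0.0.1/24), the first port moves into the cvl_0_0_ns_spdk namespace as the target (cvl_0_0, 10.0.0.2/24), an iptables rule admits port 4420, both directions are ping-checked, and nvmf_tgt is started inside the namespace with -i 0 -e 0xFFFF -m 0x1E before the script waits for its RPC socket. Condensed into plain commands (same names and addresses as the trace; the real scripts wrap all of this in helpers):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # block until the target's RPC socket answers before configuring it
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do sleep 0.2; done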
00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:51.913 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:51.913 [2024-11-18 07:14:12.865596] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:28:51.913 [2024-11-18 07:14:12.865677] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.171 [2024-11-18 07:14:12.941054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:52.171 [2024-11-18 07:14:12.986977] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:52.171 [2024-11-18 07:14:12.987045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:52.171 [2024-11-18 07:14:12.987074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:52.171 [2024-11-18 07:14:12.987085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:52.171 [2024-11-18 07:14:12.987095] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:52.171 [2024-11-18 07:14:12.988613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:52.171 [2024-11-18 07:14:12.988679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:52.171 [2024-11-18 07:14:12.988745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:52.171 [2024-11-18 07:14:12.988748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.171 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:52.171 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:28:52.171 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:52.171 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:52.171 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:52.171 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.171 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:52.171 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.171 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:52.171 [2024-11-18 07:14:13.134544] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:52.171 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.171 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:52.171 07:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:52.171 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:52.171 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:52.171 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:52.171 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.171 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:52.431 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.431 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:52.431 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.431 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:52.431 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.431 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:52.431 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.431 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:52.431 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.431 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:52.431 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.431 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:52.431 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.431 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:52.431 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.431 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:52.431 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.431 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:52.431 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:52.431 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.431 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:52.431 Malloc1 
00:28:52.431 [2024-11-18 07:14:13.230109] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:52.431 Malloc2 00:28:52.431 Malloc3 00:28:52.431 Malloc4 00:28:52.431 Malloc5 00:28:52.691 Malloc6 00:28:52.691 Malloc7 00:28:52.691 Malloc8 00:28:52.691 Malloc9 00:28:52.691 Malloc10 00:28:52.949 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.950 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:52.950 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:52.950 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:52.950 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=327906 00:28:52.950 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:28:52.950 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:28:52.950 [2024-11-18 07:14:13.764725] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:28:58.233 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:58.233 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 327818 00:28:58.233 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 327818 ']' 00:28:58.233 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 327818 00:28:58.234 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:28:58.234 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:58.234 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 327818 00:28:58.234 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:58.234 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:58.234 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 327818' 00:28:58.234 killing process with pid 327818 00:28:58.234 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 327818 00:28:58.234 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 327818 00:28:58.234 Write completed with error (sct=0, sc=8) 
[... the remainder of this excerpt is the perf initiator failing its outstanding I/O: "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" repeat for every queued write; only the distinct target/initiator error lines are kept below ...]
00:28:58.234 [2024-11-18 07:14:18.759538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199b280 is same with the state(6) to be set (message repeated 6 times)
00:28:58.234 [2024-11-18 07:14:18.759526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:58.234 [2024-11-18 07:14:18.760163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199b750 is same with the state(6) to be set (message repeated 5 times)
00:28:58.234 [2024-11-18 07:14:18.760703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:58.235 [2024-11-18 07:14:18.761358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199adb0 is same with the state(6) to be set (message repeated 9 times)
00:28:58.235 [2024-11-18 07:14:18.761883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:58.235 [2024-11-18 07:14:18.763964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:58.235 NVMe io qpair process completion error
00:28:58.236 [2024-11-18 07:14:18.772157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:58.236 [2024-11-18 07:14:18.773172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:58.236 [2024-11-18 07:14:18.774360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:58.237 [2024-11-18 07:14:18.776217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:58.237 NVMe io qpair process completion error
00:28:58.237 [2024-11-18 07:14:18.777375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bd380 is same with the state(6) to be set (message repeated 7 times)
00:28:58.237 [2024-11-18 07:14:18.777594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:58.238 [2024-11-18 07:14:18.778567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:58.238 [2024-11-18 07:14:18.779712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:58.239 [2024-11-18 07:14:18.781695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:58.239 NVMe io qpair process completion error
00:28:58.239 [2024-11-18 07:14:18.782942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:58.239 [2024-11-18 07:14:18.783911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:58.240 [2024-11-18 07:14:18.785090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... write-failure lines continue for the remaining queued I/O; the excerpt is truncated here ...]
failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O 
failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.240 Write completed with error (sct=0, sc=8) 00:28:58.240 starting I/O failed: -6 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 starting I/O failed: -6 00:28:58.241 [2024-11-18 07:14:18.787136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:58.241 NVMe io qpair process completion error 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 starting I/O failed: -6 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 starting I/O failed: -6 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 starting I/O failed: -6 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 starting I/O failed: -6 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 starting I/O failed: -6 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 starting I/O failed: -6 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 starting I/O failed: -6 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 starting I/O failed: -6 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 starting I/O failed: -6 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 [2024-11-18 07:14:18.788485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.241 starting I/O failed: -6 00:28:58.241 starting I/O failed: -6 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 starting I/O failed: -6 00:28:58.241 Write completed with error (sct=0, sc=8) 00:28:58.241 starting I/O failed: -6 00:28:58.241 Write completed with error (sct=0, sc=8) 
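The repeated per-I/O entries above are the initiator's write-completion callbacks firing after the TCP connections to cnode3 and cnode7 are torn down. Under the generic status type (sct=0), status code 8 corresponds to "Command Aborted due to SQ Deletion" (SPDK_NVME_SC_ABORTED_SQ_DELETION), which is consistent with the qpairs being deleted here. The sketch below shows a minimal completion callback of that shape, assuming the public SPDK NVMe driver API; struct io_ctx, write_complete, and the exact print format are illustrative, not the test tool's actual code.

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/* Hypothetical per-I/O bookkeeping; the real test application's context is not shown in this log. */
struct io_ctx {
	bool done;
	bool failed;
};

/* spdk_nvme_cmd_cb-style completion callback: decode the status fields carried in the
 * NVMe completion entry, which is where the "(sct=0, sc=8)" values above originate. */
static void
write_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *ctx = arg;

	ctx->done = true;
	if (spdk_nvme_cpl_is_error(cpl)) {
		ctx->failed = true;
		/* sct=0 (generic command status), sc=0x08 (SPDK_NVME_SC_ABORTED_SQ_DELETION):
		 * the write was aborted because its submission queue was deleted. */
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
	}
}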
00:28:58.241 [many further "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries]
00:28:58.241 [2024-11-18 07:14:18.789567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:58.241 [2024-11-18 07:14:18.790728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:58.242 [2024-11-18 07:14:18.792682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:58.242 NVMe io qpair process completion error
00:28:58.242 [many further "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries for in-flight writes to nqn.2016-06.io.spdk:cnode10]
00:28:58.242 [2024-11-18 07:14:18.794014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:58.243 [many further "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries]
00:28:58.243 [2024-11-18 07:14:18.795158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:58.243 [2024-11-18 07:14:18.796281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:58.244 [2024-11-18 07:14:18.799010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:58.244 NVMe io qpair process completion error
00:28:58.244 [many further "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries for in-flight writes to nqn.2016-06.io.spdk:cnode5]
00:28:58.245 [2024-11-18 07:14:18.800230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
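The "CQ transport error -6 (No such device or address)" lines are reported by the driver while reaping completion queues on dead connections: -6 is -ENXIO on Linux, whose strerror text is exactly "No such device or address". The "starting I/O failed: -6" lines are the submit side of the same condition, where new writes can no longer be queued on the failed qpair. A minimal sketch of that submit-and-poll pattern follows, assuming the public SPDK NVMe initiator API; write_and_poll and io_done are illustrative helper names, not code from this test.

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static void
io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cpl; /* status decoding as in the callback sketch above */
	*(bool *)arg = true;
}

/* Submit one write and poll its qpair, reporting failures the same way this log does:
 * a non-zero return from the submit path prints "starting I/O failed", and a negative
 * return from completion processing corresponds to the "CQ transport error" lines. */
static int
write_and_poll(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
	       void *buf, uint64_t lba, uint32_t lba_count)
{
	bool done = false;
	int rc;

	rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count, io_done, &done, 0);
	if (rc != 0) {
		/* e.g. -6 (-ENXIO, "No such device or address") once the qpair has failed */
		printf("starting I/O failed: %d\n", rc);
		return rc;
	}

	while (!done) {
		int32_t nr = spdk_nvme_qpair_process_completions(qpair, 0);
		if (nr < 0) {
			/* Transport-level failure while reaping the completion queue;
			 * the driver itself logs the "CQ transport error" message seen above. */
			return nr;
		}
	}
	return 0;
}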
00:28:58.245 [many further "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries]
00:28:58.245 [2024-11-18 07:14:18.801292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:58.246 [2024-11-18 07:14:18.802470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:58.247 [2024-11-18 07:14:18.805865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:58.247 NVMe io qpair process completion error
00:28:58.247 [many further "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries for in-flight writes to nqn.2016-06.io.spdk:cnode6]
00:28:58.247 [2024-11-18 07:14:18.807106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 [2024-11-18 07:14:18.808311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 
00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.248 Write completed with error (sct=0, sc=8) 00:28:58.248 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 [2024-11-18 07:14:18.809460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error 
(sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 [2024-11-18 07:14:18.811843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 
1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:58.249 NVMe io qpair process completion error 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.249 starting I/O failed: -6 00:28:58.249 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 [2024-11-18 07:14:18.813171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed 
with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 [2024-11-18 07:14:18.814275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O 
failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.250 starting I/O failed: -6 00:28:58.250 Write completed with error (sct=0, sc=8) 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 [2024-11-18 07:14:18.815412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with 
error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.251 Write completed with error (sct=0, sc=8) 00:28:58.251 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error 
(sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 [2024-11-18 07:14:18.817102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:58.252 NVMe io qpair process completion error 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write 
completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 [2024-11-18 07:14:18.818324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.252 starting I/O failed: -6 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.252 Write completed with error (sct=0, sc=8) 00:28:58.252 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 
starting I/O failed: -6 00:28:58.253 [2024-11-18 07:14:18.819450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 Write completed with error (sct=0, sc=8) 00:28:58.253 starting I/O failed: -6 00:28:58.253 Write completed with error (sct=0, sc=8) 
00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 [2024-11-18 07:14:18.820668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 
00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 Write completed with error (sct=0, sc=8) 00:28:58.254 starting I/O failed: -6 00:28:58.254 [2024-11-18 07:14:18.822595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:58.254 NVMe io qpair process completion error 00:28:58.254 Initializing NVMe Controllers 00:28:58.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:28:58.254 Controller IO queue size 128, less than required. 00:28:58.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
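The block above is dominated by per-I/O failure records. When triaging a run like this it is usually enough to know how many writes failed and which qpairs reported the CQ transport error. A minimal sketch that summarises those records from a saved copy of this console output; the build.log file name is an assumption, the patterns are taken verbatim from the lines above:

#!/usr/bin/env bash
# Summarise the I/O failure spam: count the failed write completions and list
# every qpair that reported "CQ transport error -6", grouped by subsystem NQN.
LOG=build.log   # hypothetical saved copy of this console output

echo "failed write completions: $(grep -c 'Write completed with error (sct=0, sc=8)' "$LOG")"

# Each qpair failure record looks like:
#   [...] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*:
#   [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (...) on qpair id 3
grep 'CQ transport error -6' "$LOG" |
  sed -E 's/.*\[(nqn[^,]*),.*qpair id ([0-9]+).*/\1 qpair \2/' |
  sort | uniq -c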
00:28:58.254 Initializing NVMe Controllers
00:28:58.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:28:58.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:28:58.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:28:58.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:28:58.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:28:58.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:28:58.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:28:58.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:28:58.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:28:58.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:58.254 Controller IO queue size 128, less than required. [reported for each controller above]
00:28:58.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
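The queue-size advisory above is printed when the requested queue depth exceeds what the target advertises (IO queue size 128 here), so part of the submissions sit queued in the driver. A minimal sketch of re-running the write workload with a smaller queue depth; the flag spellings are the commonly used spdk_nvme_perf ones and the I/O size, run time and subsystem are illustrative, so verify against the tool's --help for this build:

#!/usr/bin/env bash
# Hypothetical re-run with a queue depth kept below the advertised IO queue
# size (128), so requests are not queued at the NVMe driver.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

args=(
  -q 64        # queue depth per qpair, kept under the controller's 128
  -o 65536     # 64 KiB I/O size (illustrative)
  -w write     # same write-heavy pattern as the failing run
  -t 10        # run time in seconds
  -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode8'
)
"$PERF" "${args[@]}"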
00:28:58.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:28:58.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:28:58.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:28:58.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:28:58.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:28:58.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:28:58.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:28:58.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:28:58.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:28:58.255 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:58.255 Initialization complete. Launching workers.
00:28:58.255 ========================================================
00:28:58.255                                                                   Latency(us)
00:28:58.255 Device Information                                                       :     IOPS    MiB/s    Average       min        max
00:28:58.255 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:  1801.91    77.43   71056.83   1139.40  122453.77
00:28:58.255 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:  1846.04    79.32   69379.69    758.10  137208.06
00:28:58.255 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:  1814.42    77.96   69842.24    916.08  120700.04
00:28:58.255 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:  1790.66    76.94   70792.31    956.66  120256.14
00:28:58.255 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:  1790.45    76.93   70825.39    938.11  117007.51
00:28:58.255 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:  1787.27    76.80   70979.40    840.43  118401.36
00:28:58.255 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1768.38    75.99   71763.99   1156.13  128352.61
00:28:58.255 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:  1819.09    78.16   69799.58    820.97  131411.71
00:28:58.255 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:  1819.73    78.19   69823.14    918.25  116751.84
00:28:58.255 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1749.29    75.16   71838.86   1173.28  117291.97
00:28:58.255 ========================================================
00:28:58.255 Total                                                                    : 17987.23   772.89   70598.74    758.10  137208.06
00:28:58.255
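As a quick sanity check, the Total row above can be reproduced from the per-subsystem rows: total IOPS is the plain sum, and the aggregate average latency is the IOPS-weighted mean of the Average column. A minimal sketch, assuming the summary has been saved to perf.log and keeps the "from core" row layout shown above (the file name and the column positions are assumptions, not part of the test):

#!/usr/bin/env bash
# Recompute total IOPS and the IOPS-weighted average latency from the
# per-subsystem rows of an spdk_nvme_perf summary (hypothetical perf.log).
awk '/from core/ {
        # Counted from the end of each row: max, min, Average, MiB/s, IOPS.
        iops = $(NF-4); avg = $(NF-2)
        total_iops += iops
        weighted   += iops * avg
     }
     END {
        if (total_iops > 0)
            printf "Total IOPS %.2f, weighted average latency %.2f us\n",
                   total_iops, weighted / total_iops
     }' perf.log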
00:28:58.255 [2024-11-18 07:14:18.828138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b00b40 is same with the state(6) to be set
00:28:58.255 [2024-11-18 07:14:18.828236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b05a40 is same with the state(6) to be set
00:28:58.255 [2024-11-18 07:14:18.828294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3140 is same with the state(6) to be set
00:28:58.255 [2024-11-18 07:14:18.828351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aecf40 is same with the state(6) to be set
00:28:58.255 [2024-11-18 07:14:18.828407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8040 is same with the state(6) to be set
00:28:58.255 [2024-11-18 07:14:18.828485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afbc40 is same with the state(6) to be set
00:28:58.255 [2024-11-18 07:14:18.828565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade240 is same with the state(6) to be set
00:28:58.255 [2024-11-18 07:14:18.828622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1e40 is same with the state(6) to be set
00:28:58.255 [2024-11-18 07:14:18.828677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af6d40 is same with the state(6) to be set
00:28:58.255 [2024-11-18 07:14:18.828744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad9330 is same with the state(6) to be set
00:28:58.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:58.512 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 327906
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 327906
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 327906
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
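The NOT wait 327906 / es=1 / (( !es == 0 )) trace above is the harness asserting that waiting on the perf process fails, since the workload is expected to die once its subsystems are deleted. A minimal sketch of what such a NOT wrapper boils down to, written from the trace rather than from autotest_common.sh itself, so treat it as an approximation:

#!/usr/bin/env bash
# Approximate re-implementation of the NOT helper seen in the trace above:
# run a command, capture its exit status, and succeed only if it failed.
NOT() {
    local es=0
    "$@" || es=$?      # the real helper also validates the argument first (valid_exec_arg)
    # Invert the result: a non-zero status from "$@" makes NOT return 0.
    (( !es == 0 ))
}

# Usage in the spirit of the test: the perf process is expected to exit
# non-zero after its target subsystems are torn down (327906 is the pid
# from this particular run).
# NOT wait 327906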
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:59.450 rmmod nvme_tcp
00:28:59.450 rmmod nvme_fabrics
00:28:59.450 rmmod nvme_keyring
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 327818 ']'
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 327818
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 327818 ']'
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 327818
00:28:59.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (327818) - No such process
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 327818 is not found'
00:28:59.450 Process with pid 327818 is not found
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:59.450 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:01.357 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:01.357
00:29:01.357 real 0m9.716s
00:29:01.357 user 0m23.364s
00:29:01.357 sys 0m5.671s
00:29:01.357 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:01.357 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:01.357 ************************************
00:29:01.357 END TEST nvmf_shutdown_tc4
00:29:01.357 ************************************
00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:29:01.617
00:29:01.617 real 0m36.785s
00:29:01.617 user 1m39.495s
00:29:01.617 sys 0m11.823s
00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:01.617 ************************************
00:29:01.617 END TEST nvmf_shutdown
00:29:01.617 ************************************
00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:29:01.617 ************************************
00:29:01.617 START TEST nvmf_nsid
00:29:01.617 ************************************
00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:29:01.617 * Looking for test storage...
00:29:01.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:01.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.617 --rc genhtml_branch_coverage=1 00:29:01.617 --rc genhtml_function_coverage=1 00:29:01.617 --rc genhtml_legend=1 00:29:01.617 --rc geninfo_all_blocks=1 00:29:01.617 --rc geninfo_unexecuted_blocks=1 00:29:01.617 00:29:01.617 ' 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:01.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.617 --rc genhtml_branch_coverage=1 00:29:01.617 --rc genhtml_function_coverage=1 00:29:01.617 --rc genhtml_legend=1 00:29:01.617 --rc geninfo_all_blocks=1 00:29:01.617 --rc geninfo_unexecuted_blocks=1 00:29:01.617 00:29:01.617 ' 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:01.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.617 --rc genhtml_branch_coverage=1 00:29:01.617 --rc genhtml_function_coverage=1 00:29:01.617 --rc genhtml_legend=1 00:29:01.617 --rc geninfo_all_blocks=1 00:29:01.617 --rc geninfo_unexecuted_blocks=1 00:29:01.617 00:29:01.617 ' 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:01.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.617 --rc genhtml_branch_coverage=1 00:29:01.617 --rc genhtml_function_coverage=1 00:29:01.617 --rc genhtml_legend=1 00:29:01.617 --rc geninfo_all_blocks=1 00:29:01.617 --rc geninfo_unexecuted_blocks=1 00:29:01.617 00:29:01.617 ' 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:01.617 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:01.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:01.618 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:04.152 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:04.152 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:04.152 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:04.153 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:04.153 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:04.153 07:14:24 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:04.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:04.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:29:04.153 00:29:04.153 --- 10.0.0.2 ping statistics --- 00:29:04.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.153 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:04.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:04.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:29:04.153 00:29:04.153 --- 10.0.0.1 ping statistics --- 00:29:04.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.153 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=330644 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 330644 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 330644 ']' 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:04.153 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:04.153 [2024-11-18 07:14:24.883889] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
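The nvmftestinit trace above boils down to a small amount of iproute2/iptables work: one port of the E810 pair (cvl_0_0) is moved into a fresh network namespace to act as the NVMe/TCP target at 10.0.0.2, the other port (cvl_0_1) stays on the host as the initiator at 10.0.0.1, TCP port 4420 is opened, connectivity is verified with ping, nvme-tcp is loaded, and nvmf_tgt is then started inside the namespace. A condensed sketch of those traced commands follows; the cvl_0_* names, 10.0.0.x addresses, and the relative nvmf_tgt path are specific to this test bed, and error handling is omitted:

    # Target-side namespace gets one physical port; the host keeps the other.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side (host)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side (namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP traffic in, tagged so nvmf_tcp_fini can strip the rule later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # host -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespaced target -> host
    modprobe nvme-tcp                                  # kernel initiator for the later nvme connect
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &   # primary target (pid 330644 above)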
00:29:04.153 [2024-11-18 07:14:24.883977] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:04.153 [2024-11-18 07:14:24.954957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.153 [2024-11-18 07:14:24.998792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:04.153 [2024-11-18 07:14:24.998849] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:04.153 [2024-11-18 07:14:24.998878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:04.153 [2024-11-18 07:14:24.998890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:04.153 [2024-11-18 07:14:24.998900] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:04.153 [2024-11-18 07:14:24.999518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.153 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:04.153 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:04.153 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:04.153 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:04.154 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=330663 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=1be0604b-dc4f-4e07-857d-9c4d05d4aa9a 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=25122bda-23d5-4b7d-89f7-0cabba1a8a51 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=41fdb164-4d08-4034-b3ba-ecb63dc5c9fa 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:04.413 null0 00:29:04.413 null1 00:29:04.413 null2 00:29:04.413 [2024-11-18 07:14:25.172838] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:04.413 [2024-11-18 07:14:25.191687] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:29:04.413 [2024-11-18 07:14:25.191769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid330663 ] 00:29:04.413 [2024-11-18 07:14:25.197074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 330663 /var/tmp/tgt2.sock 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 330663 ']' 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:04.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
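What follows is the core of the nsid test: a second spdk_tgt (RPC socket /var/tmp/tgt2.sock, core mask 2) listens on 10.0.0.1:4421 and exposes nqn.2024-10.io.spdk:cnode2 with three namespaces created from the UUIDs generated above, then the host connects with nvme-cli and verifies that each namespace's NGUID is simply its UUID with the dashes removed. A minimal sketch of that per-namespace check, mirroring the uuid2nguid and nvme_get_nguid helpers seen in the trace; the ${var^^} uppercasing is an assumption about how the script normalizes case before comparing:

    # Connect to the second target and check one namespace's NGUID against its UUID.
    nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    uuid=1be0604b-dc4f-4e07-857d-9c4d05d4aa9a                 # ns1uuid from the trace
    expected=$(echo "$uuid" | tr -d -)                        # uuid2nguid: strip the dashes
    actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)  # nvme_get_nguid for nsid 1
    [[ ${actual^^} == "${expected^^}" ]] && echo "nvme0n1 NGUID matches its UUID"

The same comparison is repeated for nvme0n2 and nvme0n3 in the trace below, then the controller is disconnected and both targets are killed by cleanup.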
00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:04.413 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:04.413 [2024-11-18 07:14:25.267314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.413 [2024-11-18 07:14:25.313631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.672 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:04.672 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:04.672 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:29:05.241 [2024-11-18 07:14:26.011308] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.241 [2024-11-18 07:14:26.027467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:29:05.241 nvme0n1 nvme0n2 00:29:05.241 nvme1n1 00:29:05.241 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:29:05.241 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:29:05.241 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:05.808 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:29:05.808 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:29:05.808 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:29:05.808 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:29:05.808 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:29:05.808 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:29:05.808 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:29:05.808 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:05.808 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:05.808 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:05.808 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:29:05.808 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:29:05.808 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:29:06.743 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:06.743 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:06.743 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:06.743 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:29:06.743 07:14:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:06.743 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 1be0604b-dc4f-4e07-857d-9c4d05d4aa9a 00:29:06.743 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:06.743 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:29:06.743 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:29:06.743 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:29:06.743 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:06.743 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1be0604bdc4f4e07857d9c4d05d4aa9a 00:29:06.743 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1BE0604BDC4F4E07857D9C4D05D4AA9A 00:29:06.743 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 1BE0604BDC4F4E07857D9C4D05D4AA9A == \1\B\E\0\6\0\4\B\D\C\4\F\4\E\0\7\8\5\7\D\9\C\4\D\0\5\D\4\A\A\9\A ]] 00:29:06.743 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:29:06.743 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:06.743 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:29:06.743 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 25122bda-23d5-4b7d-89f7-0cabba1a8a51 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=25122bda23d54b7d89f70cabba1a8a51 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 25122BDA23D54B7D89F70CABBA1A8A51 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 25122BDA23D54B7D89F70CABBA1A8A51 == \2\5\1\2\2\B\D\A\2\3\D\5\4\B\7\D\8\9\F\7\0\C\A\B\B\A\1\A\8\A\5\1 ]] 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:29:07.004 07:14:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 41fdb164-4d08-4034-b3ba-ecb63dc5c9fa 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=41fdb1644d084034b3baecb63dc5c9fa 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 41FDB1644D084034B3BAECB63DC5C9FA 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 41FDB1644D084034B3BAECB63DC5C9FA == \4\1\F\D\B\1\6\4\4\D\0\8\4\0\3\4\B\3\B\A\E\C\B\6\3\D\C\5\C\9\F\A ]] 00:29:07.004 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:29:07.265 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:07.265 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:29:07.265 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 330663 00:29:07.265 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 330663 ']' 00:29:07.265 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 330663 00:29:07.265 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:07.265 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:07.265 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 330663 00:29:07.265 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:07.265 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:07.265 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 330663' 00:29:07.265 killing process with pid 330663 00:29:07.265 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 330663 00:29:07.265 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 330663 00:29:07.523 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:29:07.523 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:07.523 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:29:07.523 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:07.523 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set 
+e 00:29:07.523 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:07.523 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:07.523 rmmod nvme_tcp 00:29:07.523 rmmod nvme_fabrics 00:29:07.523 rmmod nvme_keyring 00:29:07.523 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:07.523 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:29:07.523 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:29:07.523 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 330644 ']' 00:29:07.523 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 330644 00:29:07.523 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 330644 ']' 00:29:07.523 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 330644 00:29:07.523 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:07.524 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:07.524 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 330644 00:29:07.783 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:07.783 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:07.783 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 330644' 00:29:07.783 killing process with pid 330644 00:29:07.783 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 330644 00:29:07.783 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 330644 00:29:07.783 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:07.783 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:07.783 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:07.783 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:29:07.783 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:29:07.783 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:07.783 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:29:07.783 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:07.783 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:07.784 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.784 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.784 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.323 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:10.323 00:29:10.323 real 0m8.359s 00:29:10.323 user 0m8.232s 00:29:10.323 
sys 0m2.730s 00:29:10.323 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:10.323 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:10.323 ************************************ 00:29:10.323 END TEST nvmf_nsid 00:29:10.323 ************************************ 00:29:10.323 07:14:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:10.323 00:29:10.323 real 18m12.649s 00:29:10.323 user 50m38.419s 00:29:10.323 sys 3m59.576s 00:29:10.323 07:14:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:10.323 07:14:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:10.323 ************************************ 00:29:10.323 END TEST nvmf_target_extra 00:29:10.323 ************************************ 00:29:10.323 07:14:30 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:10.323 07:14:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:10.323 07:14:30 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:10.323 07:14:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:10.323 ************************************ 00:29:10.323 START TEST nvmf_host 00:29:10.323 ************************************ 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:10.323 * Looking for test storage... 00:29:10.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:10.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.323 --rc genhtml_branch_coverage=1 00:29:10.323 --rc genhtml_function_coverage=1 00:29:10.323 --rc genhtml_legend=1 00:29:10.323 --rc geninfo_all_blocks=1 00:29:10.323 --rc geninfo_unexecuted_blocks=1 00:29:10.323 00:29:10.323 ' 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:10.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.323 --rc genhtml_branch_coverage=1 00:29:10.323 --rc genhtml_function_coverage=1 00:29:10.323 --rc genhtml_legend=1 00:29:10.323 --rc geninfo_all_blocks=1 00:29:10.323 --rc geninfo_unexecuted_blocks=1 00:29:10.323 00:29:10.323 ' 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:10.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.323 --rc genhtml_branch_coverage=1 00:29:10.323 --rc genhtml_function_coverage=1 00:29:10.323 --rc genhtml_legend=1 00:29:10.323 --rc geninfo_all_blocks=1 00:29:10.323 --rc geninfo_unexecuted_blocks=1 00:29:10.323 00:29:10.323 ' 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:10.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.323 --rc genhtml_branch_coverage=1 00:29:10.323 --rc genhtml_function_coverage=1 00:29:10.323 --rc genhtml_legend=1 00:29:10.323 --rc geninfo_all_blocks=1 00:29:10.323 --rc geninfo_unexecuted_blocks=1 00:29:10.323 00:29:10.323 ' 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:10.323 07:14:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:10.324 07:14:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:10.324 07:14:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:10.324 07:14:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:10.324 07:14:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:10.324 07:14:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:10.324 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:10.324 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:10.324 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:10.324 07:14:30 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:10.324 07:14:30 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.324 07:14:30 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.324 07:14:30 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.324 07:14:30 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:10.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.324 ************************************ 00:29:10.324 START TEST nvmf_multicontroller 00:29:10.324 ************************************ 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:10.324 * Looking for test storage... 
00:29:10.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:10.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.324 --rc genhtml_branch_coverage=1 00:29:10.324 --rc genhtml_function_coverage=1 00:29:10.324 --rc genhtml_legend=1 00:29:10.324 --rc geninfo_all_blocks=1 00:29:10.324 --rc geninfo_unexecuted_blocks=1 00:29:10.324 00:29:10.324 ' 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:10.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.324 --rc genhtml_branch_coverage=1 00:29:10.324 --rc genhtml_function_coverage=1 00:29:10.324 --rc genhtml_legend=1 00:29:10.324 --rc geninfo_all_blocks=1 00:29:10.324 --rc geninfo_unexecuted_blocks=1 00:29:10.324 00:29:10.324 ' 00:29:10.324 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:10.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.324 --rc genhtml_branch_coverage=1 00:29:10.324 --rc genhtml_function_coverage=1 00:29:10.324 --rc genhtml_legend=1 00:29:10.325 --rc geninfo_all_blocks=1 00:29:10.325 --rc geninfo_unexecuted_blocks=1 00:29:10.325 00:29:10.325 ' 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:10.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.325 --rc genhtml_branch_coverage=1 00:29:10.325 --rc genhtml_function_coverage=1 00:29:10.325 --rc genhtml_legend=1 00:29:10.325 --rc geninfo_all_blocks=1 00:29:10.325 --rc geninfo_unexecuted_blocks=1 00:29:10.325 00:29:10.325 ' 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:10.325 07:14:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:10.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:10.325 07:14:31 
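The "[: : integer expression expected" line just above (and the identical one earlier, when nvmf/common.sh was first sourced) comes from nvmf/common.sh line 33, where a numeric test of the form '[' '' -eq 1 ']' runs while the variable it checks is still empty; test's -eq operator requires integers on both sides, so it complains and simply evaluates false. The message is harmless for this run. A tiny illustration of the failure mode and two tolerant alternatives (flag is a stand-in name; the trace does not show which variable common.sh actually tests):

    flag=''
    if [ "$flag" -eq 1 ]; then :; fi        # prints "[: : integer expression expected"
    if [ "${flag:-0}" -eq 1 ]; then :; fi   # default the empty value to 0: no error
    if [[ $flag == 1 ]]; then :; fi         # string comparison tolerates an empty value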
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:10.325 07:14:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.866 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:12.866 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:12.866 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:12.867 
07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:12.867 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:12.867 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.867 07:14:33 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:12.867 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:12.867 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
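nvmf_tcp_init, traced just below, builds the physical-NIC test topology: the second ice port (cvl_0_1, 10.0.0.1) stays in the default namespace as the initiator, the first port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace for the target, TCP port 4420 is opened in iptables, and connectivity is ping-checked in both directions. Condensed from the commands in the trace (interface names and addresses are the ones this run detected; the address flushes are omitted, and the real iptables rule is also tagged with an SPDK_NVMF comment):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # default namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator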
00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:12.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:12.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:29:12.867 00:29:12.867 --- 10.0.0.2 ping statistics --- 00:29:12.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.867 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:12.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:12.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:29:12.867 00:29:12.867 --- 10.0.0.1 ping statistics --- 00:29:12.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.867 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=333111 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 333111 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 333111 ']' 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.867 [2024-11-18 07:14:33.473735] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:29:12.867 [2024-11-18 07:14:33.473844] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:12.867 [2024-11-18 07:14:33.551748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:12.867 [2024-11-18 07:14:33.601311] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:12.867 [2024-11-18 07:14:33.601386] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:12.867 [2024-11-18 07:14:33.601399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:12.867 [2024-11-18 07:14:33.601411] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:12.867 [2024-11-18 07:14:33.601420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:12.867 [2024-11-18 07:14:33.603027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:12.867 [2024-11-18 07:14:33.603094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:12.867 [2024-11-18 07:14:33.603098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.867 [2024-11-18 07:14:33.756059] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.867 Malloc0 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.867 [2024-11-18 07:14:33.822145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.867 [2024-11-18 07:14:33.830015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.867 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.127 Malloc1 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=333242 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 333242 /var/tmp/bdevperf.sock 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 333242 ']' 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:13.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
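At this point the target (nvmf_tgt, pid 333111, running inside the namespace) exposes two subsystems, cnode1 and cnode2, each backed by a 64 MiB / 512-byte-block malloc bdev and listening on 10.0.0.2 ports 4420 and 4421, and bdevperf has just been started in RPC-wait mode (-z) on its own socket; the first controller attach follows in the trace below. The essential driver sequence, condensed with the same rpc_cmd flags the trace shows (rpc_cmd is the test suite's wrapper around scripts/rpc.py, and paths are shortened):

    # target-side RPCs (traced above)
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # ... same pattern for cnode2, backed by Malloc1 ...

    # initiator side: bdevperf waits for RPC, then the first controller is attached
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1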
00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:13.127 07:14:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.387 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.387 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:13.387 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:13.387 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.387 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.387 NVMe0n1 00:29:13.387 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.387 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:13.387 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:13.387 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.387 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.387 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.387 1 00:29:13.387 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.388 request: 00:29:13.388 { 00:29:13.388 "name": "NVMe0", 00:29:13.388 "trtype": "tcp", 00:29:13.388 "traddr": "10.0.0.2", 00:29:13.388 "adrfam": "ipv4", 00:29:13.388 "trsvcid": "4420", 00:29:13.388 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:29:13.388 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:13.388 "hostaddr": "10.0.0.1", 00:29:13.388 "prchk_reftag": false, 00:29:13.388 "prchk_guard": false, 00:29:13.388 "hdgst": false, 00:29:13.388 "ddgst": false, 00:29:13.388 "allow_unrecognized_csi": false, 00:29:13.388 "method": "bdev_nvme_attach_controller", 00:29:13.388 "req_id": 1 00:29:13.388 } 00:29:13.388 Got JSON-RPC error response 00:29:13.388 response: 00:29:13.388 { 00:29:13.388 "code": -114, 00:29:13.388 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:13.388 } 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.388 request: 00:29:13.388 { 00:29:13.388 "name": "NVMe0", 00:29:13.388 "trtype": "tcp", 00:29:13.388 "traddr": "10.0.0.2", 00:29:13.388 "adrfam": "ipv4", 00:29:13.388 "trsvcid": "4420", 00:29:13.388 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:13.388 "hostaddr": "10.0.0.1", 00:29:13.388 "prchk_reftag": false, 00:29:13.388 "prchk_guard": false, 00:29:13.388 "hdgst": false, 00:29:13.388 "ddgst": false, 00:29:13.388 "allow_unrecognized_csi": false, 00:29:13.388 "method": "bdev_nvme_attach_controller", 00:29:13.388 "req_id": 1 00:29:13.388 } 00:29:13.388 Got JSON-RPC error response 00:29:13.388 response: 00:29:13.388 { 00:29:13.388 "code": -114, 00:29:13.388 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:13.388 } 00:29:13.388 07:14:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.388 request: 00:29:13.388 { 00:29:13.388 "name": "NVMe0", 00:29:13.388 "trtype": "tcp", 00:29:13.388 "traddr": "10.0.0.2", 00:29:13.388 "adrfam": "ipv4", 00:29:13.388 "trsvcid": "4420", 00:29:13.388 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:13.388 "hostaddr": "10.0.0.1", 00:29:13.388 "prchk_reftag": false, 00:29:13.388 "prchk_guard": false, 00:29:13.388 "hdgst": false, 00:29:13.388 "ddgst": false, 00:29:13.388 "multipath": "disable", 00:29:13.388 "allow_unrecognized_csi": false, 00:29:13.388 "method": "bdev_nvme_attach_controller", 00:29:13.388 "req_id": 1 00:29:13.388 } 00:29:13.388 Got JSON-RPC error response 00:29:13.388 response: 00:29:13.388 { 00:29:13.388 "code": -114, 00:29:13.388 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:13.388 } 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:13.388 07:14:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.388 request: 00:29:13.388 { 00:29:13.388 "name": "NVMe0", 00:29:13.388 "trtype": "tcp", 00:29:13.388 "traddr": "10.0.0.2", 00:29:13.388 "adrfam": "ipv4", 00:29:13.388 "trsvcid": "4420", 00:29:13.388 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:13.388 "hostaddr": "10.0.0.1", 00:29:13.388 "prchk_reftag": false, 00:29:13.388 "prchk_guard": false, 00:29:13.388 "hdgst": false, 00:29:13.388 "ddgst": false, 00:29:13.388 "multipath": "failover", 00:29:13.388 "allow_unrecognized_csi": false, 00:29:13.388 "method": "bdev_nvme_attach_controller", 00:29:13.388 "req_id": 1 00:29:13.388 } 00:29:13.388 Got JSON-RPC error response 00:29:13.388 response: 00:29:13.388 { 00:29:13.388 "code": -114, 00:29:13.388 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:13.388 } 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:13.388 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:13.389 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:13.389 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.389 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.647 NVMe0n1 00:29:13.647 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
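The duplicate-name attempts above all return -114: reusing the controller name NVMe0 with a different hostnqn, with a different subsystem (cnode2), with multipath disabled, or as a failover against the already-registered 4420 path is rejected. The @79 attach just above is then accepted, since it reaches the same subsystem through a genuinely new network path (port 4421) under the same controller name. The RPCs that follow detach that extra path, attach an independent NVMe1 controller on 4421, confirm that two controllers exist, and kick off the queued bdevperf job; condensed:

    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1      # drop the 2nd path
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
    [ "$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe)" -eq 2 ]
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests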
00:29:13.647 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:13.647 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.647 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.647 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.647 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:13.647 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.647 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.906 00:29:13.906 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.906 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:13.906 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:13.906 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.906 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.906 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.906 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:13.906 07:14:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:14.840 { 00:29:14.840 "results": [ 00:29:14.840 { 00:29:14.840 "job": "NVMe0n1", 00:29:14.840 "core_mask": "0x1", 00:29:14.840 "workload": "write", 00:29:14.840 "status": "finished", 00:29:14.840 "queue_depth": 128, 00:29:14.840 "io_size": 4096, 00:29:14.840 "runtime": 1.004516, 00:29:14.840 "iops": 18712.49437540069, 00:29:14.840 "mibps": 73.09568115390894, 00:29:14.840 "io_failed": 0, 00:29:14.840 "io_timeout": 0, 00:29:14.840 "avg_latency_us": 6829.098510045928, 00:29:14.840 "min_latency_us": 6068.148148148148, 00:29:14.840 "max_latency_us": 21456.971851851853 00:29:14.840 } 00:29:14.840 ], 00:29:14.840 "core_count": 1 00:29:14.840 } 00:29:14.840 07:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:14.840 07:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.840 07:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:14.840 07:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.840 07:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:14.840 07:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 333242 00:29:14.840 07:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 333242 ']' 00:29:14.840 07:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 333242 00:29:14.840 07:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:14.840 07:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:14.840 07:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 333242 00:29:15.099 07:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:15.099 07:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:15.099 07:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 333242' 00:29:15.099 killing process with pid 333242 00:29:15.099 07:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 333242 00:29:15.099 07:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 333242 00:29:15.099 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:15.099 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.099 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:15.099 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.099 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:15.099 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.099 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:15.099 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.100 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:15.100 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:15.100 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:15.100 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:15.100 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:15.100 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:15.100 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:15.100 [2024-11-18 07:14:33.935129] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:29:15.100 [2024-11-18 07:14:33.935231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid333242 ] 00:29:15.100 [2024-11-18 07:14:34.006419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.100 [2024-11-18 07:14:34.052818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.100 [2024-11-18 07:14:34.653196] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 99608179-b6e4-44b9-82c4-cd907db45763 already exists 00:29:15.100 [2024-11-18 07:14:34.653235] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:99608179-b6e4-44b9-82c4-cd907db45763 alias for bdev NVMe1n1 00:29:15.100 [2024-11-18 07:14:34.653261] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:15.100 Running I/O for 1 seconds... 00:29:15.100 18669.00 IOPS, 72.93 MiB/s 00:29:15.100 Latency(us) 00:29:15.100 [2024-11-18T06:14:36.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.100 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:15.100 NVMe0n1 : 1.00 18712.49 73.10 0.00 0.00 6829.10 6068.15 21456.97 00:29:15.100 [2024-11-18T06:14:36.078Z] =================================================================================================================== 00:29:15.100 [2024-11-18T06:14:36.078Z] Total : 18712.49 73.10 0.00 0.00 6829.10 6068.15 21456.97 00:29:15.100 Received shutdown signal, test time was about 1.000000 seconds 00:29:15.100 00:29:15.100 Latency(us) 00:29:15.100 [2024-11-18T06:14:36.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.100 [2024-11-18T06:14:36.078Z] =================================================================================================================== 00:29:15.100 [2024-11-18T06:14:36.078Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:15.100 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:15.361 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:15.361 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:15.361 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:15.361 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:15.361 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:15.361 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:15.361 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:15.361 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:15.361 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:15.361 rmmod nvme_tcp 00:29:15.361 rmmod nvme_fabrics 00:29:15.361 rmmod nvme_keyring 00:29:15.361 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:15.361 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:15.361 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:15.361 
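[editorial sketch] The cleanup around this point boils down to deleting the two target subsystems over RPC, unloading the host-side fabrics modules, and stopping the target process. Replayed by hand, and assuming the target listens on its default /var/tmp/spdk.sock RPC socket, it would look roughly like:
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
modprobe -v -r nvme-tcp       # also drops nvme_fabrics/nvme_keyring once unused, as in the rmmod lines above
modprobe -v -r nvme-fabrics
kill 333111                   # nvmf_tgt pid from this run; killprocess below does the equivalent plus a wait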
07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 333111 ']' 00:29:15.361 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 333111 00:29:15.361 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 333111 ']' 00:29:15.361 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 333111 00:29:15.361 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:15.361 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:15.361 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 333111 00:29:15.361 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:15.361 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:15.361 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 333111' 00:29:15.361 killing process with pid 333111 00:29:15.361 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 333111 00:29:15.361 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 333111 00:29:15.621 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:15.621 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:15.621 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:15.621 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:15.621 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:15.621 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:15.621 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:15.621 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:15.621 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:15.621 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.621 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.621 07:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.530 07:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:17.530 00:29:17.530 real 0m7.458s 00:29:17.530 user 0m11.439s 00:29:17.530 sys 0m2.392s 00:29:17.530 07:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:17.530 07:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:17.530 ************************************ 00:29:17.530 END TEST nvmf_multicontroller 00:29:17.530 ************************************ 00:29:17.530 07:14:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:29:17.530 07:14:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:17.530 07:14:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:17.530 07:14:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.790 ************************************ 00:29:17.790 START TEST nvmf_aer 00:29:17.790 ************************************ 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:17.790 * Looking for test storage... 00:29:17.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:17.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.790 --rc genhtml_branch_coverage=1 00:29:17.790 --rc genhtml_function_coverage=1 00:29:17.790 --rc genhtml_legend=1 00:29:17.790 --rc geninfo_all_blocks=1 00:29:17.790 --rc geninfo_unexecuted_blocks=1 00:29:17.790 00:29:17.790 ' 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:17.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.790 --rc genhtml_branch_coverage=1 00:29:17.790 --rc genhtml_function_coverage=1 00:29:17.790 --rc genhtml_legend=1 00:29:17.790 --rc geninfo_all_blocks=1 00:29:17.790 --rc geninfo_unexecuted_blocks=1 00:29:17.790 00:29:17.790 ' 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:17.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.790 --rc genhtml_branch_coverage=1 00:29:17.790 --rc genhtml_function_coverage=1 00:29:17.790 --rc genhtml_legend=1 00:29:17.790 --rc geninfo_all_blocks=1 00:29:17.790 --rc geninfo_unexecuted_blocks=1 00:29:17.790 00:29:17.790 ' 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:17.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.790 --rc genhtml_branch_coverage=1 00:29:17.790 --rc genhtml_function_coverage=1 00:29:17.790 --rc genhtml_legend=1 00:29:17.790 --rc geninfo_all_blocks=1 00:29:17.790 --rc geninfo_unexecuted_blocks=1 00:29:17.790 00:29:17.790 ' 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:17.790 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:17.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:17.791 07:14:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:20.337 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:20.337 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:20.337 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:20.338 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:20.338 07:14:40 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:20.338 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:20.338 
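[editorial sketch] The network plumbing printed above moves one port of the NIC pair (cvl_0_0/cvl_0_1 in this run) into a private namespace so the target (10.0.0.2) and initiator (10.0.0.1) talk over real hardware; condensed from the trace, the setup is essentially:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the listener
The two pings that follow simply verify the path in both directions before nvmf_tgt is started inside the namespace.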
07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:20.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:20.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:29:20.338 00:29:20.338 --- 10.0.0.2 ping statistics --- 00:29:20.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:20.338 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:20.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:20.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:29:20.338 00:29:20.338 --- 10.0.0.1 ping statistics --- 00:29:20.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:20.338 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=335457 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 335457 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 335457 ']' 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:20.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:20.338 07:14:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.338 [2024-11-18 07:14:40.926677] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:29:20.338 [2024-11-18 07:14:40.926772] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:20.338 [2024-11-18 07:14:40.999058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:20.338 [2024-11-18 07:14:41.045412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:20.338 [2024-11-18 07:14:41.045461] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:20.338 [2024-11-18 07:14:41.045510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:20.338 [2024-11-18 07:14:41.045523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:20.338 [2024-11-18 07:14:41.045533] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:20.338 [2024-11-18 07:14:41.047099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.338 [2024-11-18 07:14:41.047164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:20.338 [2024-11-18 07:14:41.047230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:20.338 [2024-11-18 07:14:41.047232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.338 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:20.338 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.339 [2024-11-18 07:14:41.192057] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.339 Malloc0 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.339 [2024-11-18 07:14:41.259648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.339 [ 00:29:20.339 { 00:29:20.339 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:20.339 "subtype": "Discovery", 00:29:20.339 "listen_addresses": [], 00:29:20.339 "allow_any_host": true, 00:29:20.339 "hosts": [] 00:29:20.339 }, 00:29:20.339 { 00:29:20.339 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:20.339 "subtype": "NVMe", 00:29:20.339 "listen_addresses": [ 00:29:20.339 { 00:29:20.339 "trtype": "TCP", 00:29:20.339 "adrfam": "IPv4", 00:29:20.339 "traddr": "10.0.0.2", 00:29:20.339 "trsvcid": "4420" 00:29:20.339 } 00:29:20.339 ], 00:29:20.339 "allow_any_host": true, 00:29:20.339 "hosts": [], 00:29:20.339 "serial_number": "SPDK00000000000001", 00:29:20.339 "model_number": "SPDK bdev Controller", 00:29:20.339 "max_namespaces": 2, 00:29:20.339 "min_cntlid": 1, 00:29:20.339 "max_cntlid": 65519, 00:29:20.339 "namespaces": [ 00:29:20.339 { 00:29:20.339 "nsid": 1, 00:29:20.339 "bdev_name": "Malloc0", 00:29:20.339 "name": "Malloc0", 00:29:20.339 "nguid": "FDBE19311D634339B8C5C9906BCCDC74", 00:29:20.339 "uuid": "fdbe1931-1d63-4339-b8c5-c9906bccdc74" 00:29:20.339 } 00:29:20.339 ] 00:29:20.339 } 00:29:20.339 ] 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=335486 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:20.339 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:20.598 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:20.598 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:20.598 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:20.598 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:20.598 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:20.598 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:20.598 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:20.598 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:20.598 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.598 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.598 Malloc1 00:29:20.598 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.598 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:20.598 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.598 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.598 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.598 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:20.598 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.598 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.598 [ 00:29:20.598 { 00:29:20.598 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:20.598 "subtype": "Discovery", 00:29:20.598 "listen_addresses": [], 00:29:20.598 "allow_any_host": true, 00:29:20.598 "hosts": [] 00:29:20.598 }, 00:29:20.598 { 00:29:20.598 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:20.598 "subtype": "NVMe", 00:29:20.598 "listen_addresses": [ 00:29:20.598 { 00:29:20.598 "trtype": "TCP", 00:29:20.598 "adrfam": "IPv4", 00:29:20.598 "traddr": "10.0.0.2", 00:29:20.598 "trsvcid": "4420" 00:29:20.598 } 00:29:20.598 ], 00:29:20.598 "allow_any_host": true, 00:29:20.598 "hosts": [], 00:29:20.598 "serial_number": "SPDK00000000000001", 00:29:20.598 "model_number": "SPDK bdev Controller", 00:29:20.598 "max_namespaces": 2, 00:29:20.598 "min_cntlid": 1, 00:29:20.598 "max_cntlid": 65519, 00:29:20.598 "namespaces": [ 00:29:20.598 { 00:29:20.598 "nsid": 1, 00:29:20.598 "bdev_name": "Malloc0", 00:29:20.598 "name": "Malloc0", 00:29:20.598 "nguid": "FDBE19311D634339B8C5C9906BCCDC74", 00:29:20.598 "uuid": "fdbe1931-1d63-4339-b8c5-c9906bccdc74" 00:29:20.598 }, 00:29:20.598 { 00:29:20.598 "nsid": 2, 00:29:20.598 "bdev_name": "Malloc1", 00:29:20.598 "name": "Malloc1", 00:29:20.598 "nguid": "E212C69881884462939B2F0A4E701380", 00:29:20.598 "uuid": 
"e212c698-8188-4462-939b-2f0a4e701380" 00:29:20.598 } 00:29:20.598 ] 00:29:20.598 } 00:29:20.598 ] 00:29:20.598 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.598 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 335486 00:29:20.598 Asynchronous Event Request test 00:29:20.598 Attaching to 10.0.0.2 00:29:20.598 Attached to 10.0.0.2 00:29:20.598 Registering asynchronous event callbacks... 00:29:20.598 Starting namespace attribute notice tests for all controllers... 00:29:20.598 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:20.598 aer_cb - Changed Namespace 00:29:20.598 Cleaning up... 00:29:20.598 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:20.598 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.598 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:20.858 rmmod nvme_tcp 00:29:20.858 rmmod nvme_fabrics 00:29:20.858 rmmod nvme_keyring 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 335457 ']' 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 335457 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 335457 ']' 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 335457 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:20.858 07:14:41 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 335457 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 335457' 00:29:20.858 killing process with pid 335457 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 335457 00:29:20.858 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 335457 00:29:21.117 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:21.117 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:21.117 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:21.117 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:21.117 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:21.117 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:21.117 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:21.117 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:21.117 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:21.117 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.117 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.117 07:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.021 07:14:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:23.021 00:29:23.021 real 0m5.455s 00:29:23.021 user 0m4.408s 00:29:23.021 sys 0m1.918s 00:29:23.021 07:14:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:23.021 07:14:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:23.021 ************************************ 00:29:23.021 END TEST nvmf_aer 00:29:23.021 ************************************ 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.281 ************************************ 00:29:23.281 START TEST nvmf_async_init 00:29:23.281 ************************************ 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:23.281 * Looking for test storage... 
00:29:23.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:23.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.281 --rc genhtml_branch_coverage=1 00:29:23.281 --rc genhtml_function_coverage=1 00:29:23.281 --rc genhtml_legend=1 00:29:23.281 --rc geninfo_all_blocks=1 00:29:23.281 --rc geninfo_unexecuted_blocks=1 00:29:23.281 00:29:23.281 ' 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:23.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.281 --rc genhtml_branch_coverage=1 00:29:23.281 --rc genhtml_function_coverage=1 00:29:23.281 --rc genhtml_legend=1 00:29:23.281 --rc geninfo_all_blocks=1 00:29:23.281 --rc geninfo_unexecuted_blocks=1 00:29:23.281 00:29:23.281 ' 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:23.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.281 --rc genhtml_branch_coverage=1 00:29:23.281 --rc genhtml_function_coverage=1 00:29:23.281 --rc genhtml_legend=1 00:29:23.281 --rc geninfo_all_blocks=1 00:29:23.281 --rc geninfo_unexecuted_blocks=1 00:29:23.281 00:29:23.281 ' 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:23.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.281 --rc genhtml_branch_coverage=1 00:29:23.281 --rc genhtml_function_coverage=1 00:29:23.281 --rc genhtml_legend=1 00:29:23.281 --rc geninfo_all_blocks=1 00:29:23.281 --rc geninfo_unexecuted_blocks=1 00:29:23.281 00:29:23.281 ' 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.281 07:14:44 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:23.281 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:23.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:23.282 07:14:44 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=124c45286b8242d79335949445db110a 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:23.282 07:14:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:25.816 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:25.816 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:25.816 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:25.816 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:25.816 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:25.817 07:14:46 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:25.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:25.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:29:25.817 00:29:25.817 --- 10.0.0.2 ping statistics --- 00:29:25.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.817 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:25.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:25.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:29:25.817 00:29:25.817 --- 10.0.0.1 ping statistics --- 00:29:25.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.817 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=337551 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 337551 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 337551 ']' 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:25.817 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.817 [2024-11-18 07:14:46.622105] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
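Before nvmf_tgt is launched, the trace above isolates the target NIC in its own network namespace, assigns the 10.0.0.x addresses, opens TCP port 4420, and ping-checks both directions. A condensed sketch of that setup using the names and addresses from this run (paths shortened, iptables comment abbreviated):

  sudo ip netns add cvl_0_0_ns_spdk                                        # namespace that will own the target NIC
  sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  sudo ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side stays in the root namespace
  sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side lives in the namespace
  sudo ip link set cvl_0_1 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
  sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                                       # target address reachable from the root namespace
  sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # and back the other way
  sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # target runs inside the namespace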
00:29:25.817 [2024-11-18 07:14:46.622181] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.817 [2024-11-18 07:14:46.692092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.817 [2024-11-18 07:14:46.736723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.817 [2024-11-18 07:14:46.736800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.817 [2024-11-18 07:14:46.736814] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.817 [2024-11-18 07:14:46.736824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.817 [2024-11-18 07:14:46.736843] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:25.817 [2024-11-18 07:14:46.737405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.076 [2024-11-18 07:14:46.871976] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.076 null0 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 124c45286b8242d79335949445db110a 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.076 [2024-11-18 07:14:46.912230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.076 07:14:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.335 nvme0n1 00:29:26.335 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.335 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:26.335 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.335 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.335 [ 00:29:26.335 { 00:29:26.335 "name": "nvme0n1", 00:29:26.335 "aliases": [ 00:29:26.335 "124c4528-6b82-42d7-9335-949445db110a" 00:29:26.335 ], 00:29:26.335 "product_name": "NVMe disk", 00:29:26.335 "block_size": 512, 00:29:26.335 "num_blocks": 2097152, 00:29:26.335 "uuid": "124c4528-6b82-42d7-9335-949445db110a", 00:29:26.335 "numa_id": 0, 00:29:26.335 "assigned_rate_limits": { 00:29:26.335 "rw_ios_per_sec": 0, 00:29:26.335 "rw_mbytes_per_sec": 0, 00:29:26.335 "r_mbytes_per_sec": 0, 00:29:26.335 "w_mbytes_per_sec": 0 00:29:26.335 }, 00:29:26.335 "claimed": false, 00:29:26.335 "zoned": false, 00:29:26.335 "supported_io_types": { 00:29:26.335 "read": true, 00:29:26.335 "write": true, 00:29:26.335 "unmap": false, 00:29:26.335 "flush": true, 00:29:26.335 "reset": true, 00:29:26.335 "nvme_admin": true, 00:29:26.335 "nvme_io": true, 00:29:26.335 "nvme_io_md": false, 00:29:26.335 "write_zeroes": true, 00:29:26.335 "zcopy": false, 00:29:26.335 "get_zone_info": false, 00:29:26.335 "zone_management": false, 00:29:26.335 "zone_append": false, 00:29:26.335 "compare": true, 00:29:26.335 "compare_and_write": true, 00:29:26.335 "abort": true, 00:29:26.335 "seek_hole": false, 00:29:26.335 "seek_data": false, 00:29:26.335 "copy": true, 00:29:26.335 "nvme_iov_md": false 00:29:26.335 }, 00:29:26.335 
"memory_domains": [ 00:29:26.335 { 00:29:26.335 "dma_device_id": "system", 00:29:26.335 "dma_device_type": 1 00:29:26.335 } 00:29:26.335 ], 00:29:26.335 "driver_specific": { 00:29:26.335 "nvme": [ 00:29:26.335 { 00:29:26.335 "trid": { 00:29:26.335 "trtype": "TCP", 00:29:26.335 "adrfam": "IPv4", 00:29:26.335 "traddr": "10.0.0.2", 00:29:26.335 "trsvcid": "4420", 00:29:26.335 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:26.335 }, 00:29:26.335 "ctrlr_data": { 00:29:26.335 "cntlid": 1, 00:29:26.335 "vendor_id": "0x8086", 00:29:26.335 "model_number": "SPDK bdev Controller", 00:29:26.335 "serial_number": "00000000000000000000", 00:29:26.335 "firmware_revision": "25.01", 00:29:26.335 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:26.336 "oacs": { 00:29:26.336 "security": 0, 00:29:26.336 "format": 0, 00:29:26.336 "firmware": 0, 00:29:26.336 "ns_manage": 0 00:29:26.336 }, 00:29:26.336 "multi_ctrlr": true, 00:29:26.336 "ana_reporting": false 00:29:26.336 }, 00:29:26.336 "vs": { 00:29:26.336 "nvme_version": "1.3" 00:29:26.336 }, 00:29:26.336 "ns_data": { 00:29:26.336 "id": 1, 00:29:26.336 "can_share": true 00:29:26.336 } 00:29:26.336 } 00:29:26.336 ], 00:29:26.336 "mp_policy": "active_passive" 00:29:26.336 } 00:29:26.336 } 00:29:26.336 ] 00:29:26.336 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.336 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:26.336 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.336 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.336 [2024-11-18 07:14:47.161279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:26.336 [2024-11-18 07:14:47.161375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x830700 (9): Bad file descriptor 00:29:26.336 [2024-11-18 07:14:47.293632] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:29:26.336 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.336 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:26.336 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.336 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.336 [ 00:29:26.336 { 00:29:26.336 "name": "nvme0n1", 00:29:26.336 "aliases": [ 00:29:26.336 "124c4528-6b82-42d7-9335-949445db110a" 00:29:26.336 ], 00:29:26.336 "product_name": "NVMe disk", 00:29:26.336 "block_size": 512, 00:29:26.336 "num_blocks": 2097152, 00:29:26.336 "uuid": "124c4528-6b82-42d7-9335-949445db110a", 00:29:26.336 "numa_id": 0, 00:29:26.336 "assigned_rate_limits": { 00:29:26.336 "rw_ios_per_sec": 0, 00:29:26.336 "rw_mbytes_per_sec": 0, 00:29:26.336 "r_mbytes_per_sec": 0, 00:29:26.336 "w_mbytes_per_sec": 0 00:29:26.336 }, 00:29:26.336 "claimed": false, 00:29:26.336 "zoned": false, 00:29:26.336 "supported_io_types": { 00:29:26.336 "read": true, 00:29:26.336 "write": true, 00:29:26.336 "unmap": false, 00:29:26.336 "flush": true, 00:29:26.336 "reset": true, 00:29:26.336 "nvme_admin": true, 00:29:26.336 "nvme_io": true, 00:29:26.336 "nvme_io_md": false, 00:29:26.336 "write_zeroes": true, 00:29:26.336 "zcopy": false, 00:29:26.336 "get_zone_info": false, 00:29:26.336 "zone_management": false, 00:29:26.336 "zone_append": false, 00:29:26.336 "compare": true, 00:29:26.336 "compare_and_write": true, 00:29:26.336 "abort": true, 00:29:26.336 "seek_hole": false, 00:29:26.336 "seek_data": false, 00:29:26.336 "copy": true, 00:29:26.336 "nvme_iov_md": false 00:29:26.336 }, 00:29:26.336 "memory_domains": [ 00:29:26.336 { 00:29:26.336 "dma_device_id": "system", 00:29:26.336 "dma_device_type": 1 00:29:26.336 } 00:29:26.336 ], 00:29:26.336 "driver_specific": { 00:29:26.336 "nvme": [ 00:29:26.336 { 00:29:26.336 "trid": { 00:29:26.336 "trtype": "TCP", 00:29:26.336 "adrfam": "IPv4", 00:29:26.336 "traddr": "10.0.0.2", 00:29:26.336 "trsvcid": "4420", 00:29:26.336 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:26.336 }, 00:29:26.336 "ctrlr_data": { 00:29:26.336 "cntlid": 2, 00:29:26.336 "vendor_id": "0x8086", 00:29:26.336 "model_number": "SPDK bdev Controller", 00:29:26.336 "serial_number": "00000000000000000000", 00:29:26.336 "firmware_revision": "25.01", 00:29:26.336 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:26.336 "oacs": { 00:29:26.336 "security": 0, 00:29:26.336 "format": 0, 00:29:26.336 "firmware": 0, 00:29:26.336 "ns_manage": 0 00:29:26.336 }, 00:29:26.336 "multi_ctrlr": true, 00:29:26.336 "ana_reporting": false 00:29:26.336 }, 00:29:26.336 "vs": { 00:29:26.336 "nvme_version": "1.3" 00:29:26.336 }, 00:29:26.336 "ns_data": { 00:29:26.336 "id": 1, 00:29:26.336 "can_share": true 00:29:26.336 } 00:29:26.336 } 00:29:26.336 ], 00:29:26.336 "mp_policy": "active_passive" 00:29:26.336 } 00:29:26.336 } 00:29:26.336 ] 00:29:26.336 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.336 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.336 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.336 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
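The second bdev_get_bdevs dump above is nearly identical to the first; the meaningful difference is ctrlr_data.cntlid going from 1 to 2, which is how the test confirms the controller really dropped and re-established its connection after the reset. When reading such dumps by hand, that field can be pulled out directly (jq is an assumption here, not something the test itself uses):

  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'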
00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.cu1sJoqx7I 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.cu1sJoqx7I 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.cu1sJoqx7I 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.595 [2024-11-18 07:14:47.353911] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:26.595 [2024-11-18 07:14:47.354042] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.595 [2024-11-18 07:14:47.369949] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:26.595 nvme0n1 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.595 [ 00:29:26.595 { 00:29:26.595 "name": "nvme0n1", 00:29:26.595 "aliases": [ 00:29:26.595 "124c4528-6b82-42d7-9335-949445db110a" 00:29:26.595 ], 00:29:26.595 "product_name": "NVMe disk", 00:29:26.595 "block_size": 512, 00:29:26.595 "num_blocks": 2097152, 00:29:26.595 "uuid": "124c4528-6b82-42d7-9335-949445db110a", 00:29:26.595 "numa_id": 0, 00:29:26.595 "assigned_rate_limits": { 00:29:26.595 "rw_ios_per_sec": 0, 00:29:26.595 "rw_mbytes_per_sec": 0, 00:29:26.595 "r_mbytes_per_sec": 0, 00:29:26.595 "w_mbytes_per_sec": 0 00:29:26.595 }, 00:29:26.595 "claimed": false, 00:29:26.595 "zoned": false, 00:29:26.595 "supported_io_types": { 00:29:26.595 "read": true, 00:29:26.595 "write": true, 00:29:26.595 "unmap": false, 00:29:26.595 "flush": true, 00:29:26.595 "reset": true, 00:29:26.595 "nvme_admin": true, 00:29:26.595 "nvme_io": true, 00:29:26.595 "nvme_io_md": false, 00:29:26.595 "write_zeroes": true, 00:29:26.595 "zcopy": false, 00:29:26.595 "get_zone_info": false, 00:29:26.595 "zone_management": false, 00:29:26.595 "zone_append": false, 00:29:26.595 "compare": true, 00:29:26.595 "compare_and_write": true, 00:29:26.595 "abort": true, 00:29:26.595 "seek_hole": false, 00:29:26.595 "seek_data": false, 00:29:26.595 "copy": true, 00:29:26.595 "nvme_iov_md": false 00:29:26.595 }, 00:29:26.595 "memory_domains": [ 00:29:26.595 { 00:29:26.595 "dma_device_id": "system", 00:29:26.595 "dma_device_type": 1 00:29:26.595 } 00:29:26.595 ], 00:29:26.595 "driver_specific": { 00:29:26.595 "nvme": [ 00:29:26.595 { 00:29:26.595 "trid": { 00:29:26.595 "trtype": "TCP", 00:29:26.595 "adrfam": "IPv4", 00:29:26.595 "traddr": "10.0.0.2", 00:29:26.595 "trsvcid": "4421", 00:29:26.595 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:26.595 }, 00:29:26.595 "ctrlr_data": { 00:29:26.595 "cntlid": 3, 00:29:26.595 "vendor_id": "0x8086", 00:29:26.595 "model_number": "SPDK bdev Controller", 00:29:26.595 "serial_number": "00000000000000000000", 00:29:26.595 "firmware_revision": "25.01", 00:29:26.595 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:26.595 "oacs": { 00:29:26.595 "security": 0, 00:29:26.595 "format": 0, 00:29:26.595 "firmware": 0, 00:29:26.595 "ns_manage": 0 00:29:26.595 }, 00:29:26.595 "multi_ctrlr": true, 00:29:26.595 "ana_reporting": false 00:29:26.595 }, 00:29:26.595 "vs": { 00:29:26.595 "nvme_version": "1.3" 00:29:26.595 }, 00:29:26.595 "ns_data": { 00:29:26.595 "id": 1, 00:29:26.595 "can_share": true 00:29:26.595 } 00:29:26.595 } 00:29:26.595 ], 00:29:26.595 "mp_policy": "active_passive" 00:29:26.595 } 00:29:26.595 } 00:29:26.595 ] 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.cu1sJoqx7I 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
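The TLS portion of the test above restricts the subsystem to an explicit host NQN and reattaches over a second, PSK-protected listener on port 4421 (cntlid 3 in the last dump). A sketch of that sequence; the PSK shown is the sample interchange key printed in the trace, and scripts/rpc.py with the default RPC socket is an assumption:

  KEY=$(mktemp)
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
  chmod 0600 "$KEY"
  ./scripts/rpc.py keyring_file_add_key key0 "$KEY"
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
  rm -f "$KEY"                                                       # the test removes the key file only at the end, after detaching the controller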
00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:26.595 rmmod nvme_tcp 00:29:26.595 rmmod nvme_fabrics 00:29:26.595 rmmod nvme_keyring 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 337551 ']' 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 337551 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 337551 ']' 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 337551 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:26.595 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:26.596 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 337551 00:29:26.855 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:26.855 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:26.855 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 337551' 00:29:26.855 killing process with pid 337551 00:29:26.855 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 337551 00:29:26.855 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 337551 00:29:26.855 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:26.855 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:26.855 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:26.855 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:26.855 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:29:26.855 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:26.855 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:29:26.855 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:26.855 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:26.855 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.855 
07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.855 07:14:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.389 07:14:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:29.389 00:29:29.389 real 0m5.760s 00:29:29.389 user 0m2.176s 00:29:29.389 sys 0m2.008s 00:29:29.389 07:14:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.390 ************************************ 00:29:29.390 END TEST nvmf_async_init 00:29:29.390 ************************************ 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.390 ************************************ 00:29:29.390 START TEST dma 00:29:29.390 ************************************ 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:29.390 * Looking for test storage... 00:29:29.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:29.390 07:14:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:29.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.390 --rc genhtml_branch_coverage=1 00:29:29.390 --rc genhtml_function_coverage=1 00:29:29.390 --rc genhtml_legend=1 00:29:29.390 --rc geninfo_all_blocks=1 00:29:29.390 --rc geninfo_unexecuted_blocks=1 00:29:29.390 00:29:29.390 ' 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:29.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.390 --rc genhtml_branch_coverage=1 00:29:29.390 --rc genhtml_function_coverage=1 00:29:29.390 --rc genhtml_legend=1 00:29:29.390 --rc geninfo_all_blocks=1 00:29:29.390 --rc geninfo_unexecuted_blocks=1 00:29:29.390 00:29:29.390 ' 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:29.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.390 --rc genhtml_branch_coverage=1 00:29:29.390 --rc genhtml_function_coverage=1 00:29:29.390 --rc genhtml_legend=1 00:29:29.390 --rc geninfo_all_blocks=1 00:29:29.390 --rc geninfo_unexecuted_blocks=1 00:29:29.390 00:29:29.390 ' 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:29.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.390 --rc genhtml_branch_coverage=1 00:29:29.390 --rc genhtml_function_coverage=1 00:29:29.390 --rc genhtml_legend=1 00:29:29.390 --rc geninfo_all_blocks=1 00:29:29.390 --rc geninfo_unexecuted_blocks=1 00:29:29.390 00:29:29.390 ' 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.390 
07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:29.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:29.390 07:14:50 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:29.390 00:29:29.391 real 0m0.169s 00:29:29.391 user 0m0.111s 00:29:29.391 sys 0m0.067s 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:29.391 ************************************ 00:29:29.391 END TEST dma 00:29:29.391 ************************************ 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.391 ************************************ 00:29:29.391 START TEST nvmf_identify 00:29:29.391 
************************************ 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:29.391 * Looking for test storage... 00:29:29.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:29.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.391 --rc genhtml_branch_coverage=1 00:29:29.391 --rc genhtml_function_coverage=1 00:29:29.391 --rc genhtml_legend=1 00:29:29.391 --rc geninfo_all_blocks=1 00:29:29.391 --rc geninfo_unexecuted_blocks=1 00:29:29.391 00:29:29.391 ' 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:29.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.391 --rc genhtml_branch_coverage=1 00:29:29.391 --rc genhtml_function_coverage=1 00:29:29.391 --rc genhtml_legend=1 00:29:29.391 --rc geninfo_all_blocks=1 00:29:29.391 --rc geninfo_unexecuted_blocks=1 00:29:29.391 00:29:29.391 ' 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:29.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.391 --rc genhtml_branch_coverage=1 00:29:29.391 --rc genhtml_function_coverage=1 00:29:29.391 --rc genhtml_legend=1 00:29:29.391 --rc geninfo_all_blocks=1 00:29:29.391 --rc geninfo_unexecuted_blocks=1 00:29:29.391 00:29:29.391 ' 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:29.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.391 --rc genhtml_branch_coverage=1 00:29:29.391 --rc genhtml_function_coverage=1 00:29:29.391 --rc genhtml_legend=1 00:29:29.391 --rc geninfo_all_blocks=1 00:29:29.391 --rc geninfo_unexecuted_blocks=1 00:29:29.391 00:29:29.391 ' 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.391 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.392 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.392 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:29.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:29.392 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:29.392 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:29.392 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:29.392 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:29.392 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:29.392 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:29.392 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:29.392 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.392 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:29.392 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:29.392 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:29.392 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.392 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.392 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.392 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:29.392 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:29.392 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:29.392 07:14:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:31.925 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:31.925 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
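The stretch of trace above (gather_supported_nvmf_pci_devs) is the NIC-discovery phase: known Intel E810/X722 and Mellanox device IDs are collected into the e810/x722/mlx arrays, each matching PCI function is echoed ("Found 0000:0a:00.0 (0x8086 - 0x159b)"), and its kernel interface is then resolved through sysfs (the "Found net devices under ..." lines just below). A minimal sketch of that match-and-map pattern, using only the two E810 IDs seen in the trace and hypothetical variable names, not the common.sh source itself:

  # Illustration only: match NICs by PCI vendor:device ID, then map each hit to
  # its kernel net interface via sysfs, mirroring the "Found 0000:0a:00.x" /
  # "Found net devices under" pairs in the trace above.
  intel=0x8086
  e810_ids=(0x1592 0x159b)          # both IDs appear in the e810 list traced above
  net_devs=()
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
      [[ $vendor == "$intel" ]] || continue
      [[ " ${e810_ids[*]} " == *" $device "* ]] || continue
      echo "Found ${pci##*/} ($vendor - $device)"
      for net in "$pci"/net/*; do   # e.g. .../0000:0a:00.0/net/cvl_0_0
          [[ -e $net ]] && net_devs+=("${net##*/}")
      done
  done
  echo "Found net devices: ${net_devs[*]}"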
00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:31.925 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:31.925 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:31.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:31.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:29:31.925 00:29:31.925 --- 10.0.0.2 ping statistics --- 00:29:31.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.925 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:29:31.925 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:31.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:31.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:29:31.925 00:29:31.925 --- 10.0.0.1 ping statistics --- 00:29:31.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.925 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=339694 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 339694 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 339694 ']' 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:31.926 [2024-11-18 07:14:52.549506] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
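Before the identify run, the trace above wires the two physical E810 ports into a point-to-point test path: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in iptables, connectivity is checked with one ping in each direction, nvme-tcp is loaded, and nvmf_tgt is started inside the namespace. A condensed sketch of that sequence, with the commands taken from the trace but paths shortened and the flush/cleanup steps omitted, not the exact common.sh implementation:

  # Condensed from the nvmf_tcp_init / nvmftestinit trace above; illustrative only.
  TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                    # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"                # initiator side stays in the root ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1               # namespace -> root ns
  modprobe nvme-tcp
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # then waitforlisten on its pid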
00:29:31.926 [2024-11-18 07:14:52.549579] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:31.926 [2024-11-18 07:14:52.625185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:31.926 [2024-11-18 07:14:52.672535] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:31.926 [2024-11-18 07:14:52.672589] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:31.926 [2024-11-18 07:14:52.672604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:31.926 [2024-11-18 07:14:52.672616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:31.926 [2024-11-18 07:14:52.672626] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:31.926 [2024-11-18 07:14:52.674170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.926 [2024-11-18 07:14:52.674193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:31.926 [2024-11-18 07:14:52.674254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:31.926 [2024-11-18 07:14:52.674256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:31.926 [2024-11-18 07:14:52.808838] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:31.926 Malloc0 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:31.926 [2024-11-18 07:14:52.895362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.926 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:32.189 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.189 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:32.189 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.189 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:32.189 [ 00:29:32.189 { 00:29:32.189 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:32.189 "subtype": "Discovery", 00:29:32.189 "listen_addresses": [ 00:29:32.189 { 00:29:32.189 "trtype": "TCP", 00:29:32.189 "adrfam": "IPv4", 00:29:32.189 "traddr": "10.0.0.2", 00:29:32.189 "trsvcid": "4420" 00:29:32.189 } 00:29:32.189 ], 00:29:32.190 "allow_any_host": true, 00:29:32.190 "hosts": [] 00:29:32.190 }, 00:29:32.190 { 00:29:32.190 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:32.190 "subtype": "NVMe", 00:29:32.190 "listen_addresses": [ 00:29:32.190 { 00:29:32.190 "trtype": "TCP", 00:29:32.190 "adrfam": "IPv4", 00:29:32.190 "traddr": "10.0.0.2", 00:29:32.190 "trsvcid": "4420" 00:29:32.190 } 00:29:32.190 ], 00:29:32.190 "allow_any_host": true, 00:29:32.190 "hosts": [], 00:29:32.190 "serial_number": "SPDK00000000000001", 00:29:32.190 "model_number": "SPDK bdev Controller", 00:29:32.190 "max_namespaces": 32, 00:29:32.190 "min_cntlid": 1, 00:29:32.190 "max_cntlid": 65519, 00:29:32.190 "namespaces": [ 00:29:32.190 { 00:29:32.190 "nsid": 1, 00:29:32.190 "bdev_name": "Malloc0", 00:29:32.190 "name": "Malloc0", 00:29:32.190 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:32.190 "eui64": "ABCDEF0123456789", 00:29:32.190 "uuid": "2d6c31f7-c127-4520-8a78-7c0bdb4504fd" 00:29:32.190 } 00:29:32.190 ] 00:29:32.190 } 00:29:32.190 ] 00:29:32.190 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.190 07:14:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:32.190 [2024-11-18 07:14:52.932911] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:29:32.190 [2024-11-18 07:14:52.932950] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid339716 ] 00:29:32.190 [2024-11-18 07:14:52.981528] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:29:32.190 [2024-11-18 07:14:52.981594] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:32.190 [2024-11-18 07:14:52.981605] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:32.190 [2024-11-18 07:14:52.981619] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:32.190 [2024-11-18 07:14:52.981635] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:32.190 [2024-11-18 07:14:52.985961] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:29:32.190 [2024-11-18 07:14:52.986023] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2451650 0 00:29:32.190 [2024-11-18 07:14:52.993515] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:32.190 [2024-11-18 07:14:52.993542] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:32.190 [2024-11-18 07:14:52.993555] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:32.190 [2024-11-18 07:14:52.993565] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:32.190 [2024-11-18 07:14:52.993616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.190 [2024-11-18 07:14:52.993634] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.190 [2024-11-18 07:14:52.993646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2451650) 00:29:32.190 [2024-11-18 07:14:52.993670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:32.190 [2024-11-18 07:14:52.993718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24abf40, cid 0, qid 0 00:29:32.190 [2024-11-18 07:14:53.001506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.190 [2024-11-18 07:14:53.001526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.190 [2024-11-18 07:14:53.001533] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.190 [2024-11-18 07:14:53.001541] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24abf40) on tqpair=0x2451650 00:29:32.190 [2024-11-18 07:14:53.001562] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:32.190 [2024-11-18 07:14:53.001574] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:29:32.190 [2024-11-18 07:14:53.001584] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:29:32.190 [2024-11-18 07:14:53.001606] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.190 [2024-11-18 07:14:53.001615] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.190 [2024-11-18 07:14:53.001622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2451650) 00:29:32.190 [2024-11-18 07:14:53.001634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.190 [2024-11-18 07:14:53.001659] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24abf40, cid 0, qid 0 00:29:32.190 [2024-11-18 07:14:53.001765] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.190 [2024-11-18 07:14:53.001785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.190 [2024-11-18 07:14:53.001792] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.190 [2024-11-18 07:14:53.001799] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24abf40) on tqpair=0x2451650 00:29:32.190 [2024-11-18 07:14:53.001808] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:29:32.190 [2024-11-18 07:14:53.001821] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:29:32.190 [2024-11-18 07:14:53.001841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.190 [2024-11-18 07:14:53.001849] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.190 [2024-11-18 07:14:53.001856] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2451650) 00:29:32.190 [2024-11-18 07:14:53.001866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.190 [2024-11-18 07:14:53.001888] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24abf40, cid 0, qid 0 00:29:32.190 [2024-11-18 07:14:53.001969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.190 [2024-11-18 07:14:53.001983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.190 [2024-11-18 07:14:53.001990] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.190 [2024-11-18 07:14:53.001997] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24abf40) on tqpair=0x2451650 00:29:32.190 [2024-11-18 07:14:53.002006] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:29:32.190 [2024-11-18 07:14:53.002021] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:32.190 [2024-11-18 07:14:53.002033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.190 [2024-11-18 07:14:53.002041] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.190 [2024-11-18 07:14:53.002048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2451650) 00:29:32.190 [2024-11-18 07:14:53.002058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.190 [2024-11-18 07:14:53.002079] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24abf40, cid 0, qid 0 
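Stripped of the xtrace decoration, the identify.sh setup traced above is a short RPC sequence against the freshly started target, followed by the host-side identify tool pointed at the discovery subsystem. A reduced sketch, with every command lifted from the trace; rpc_cmd is the autotest helper that forwards to scripts/rpc.py, so treat that wiring as assumed:

  # RPC skeleton of host/identify.sh as traced above (illustrative).
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
          --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_get_subsystems          # expect discovery + cnode1 with Malloc0 as nsid 1
  ./build/bin/spdk_nvme_identify \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
          -L all

The DEBUG lines that follow are the expected fabric bring-up for that identify: ICReq/ICResp on the new TCP qpair, FABRIC CONNECT on the admin queue (CNTLID 0x0001), property reads of VS and CAP, then CC.EN is raised, CSTS.RDY is polled, and the IDENTIFY admin command is issued.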
00:29:32.190 [2024-11-18 07:14:53.002155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.190 [2024-11-18 07:14:53.002169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.190 [2024-11-18 07:14:53.002176] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.190 [2024-11-18 07:14:53.002182] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24abf40) on tqpair=0x2451650 00:29:32.190 [2024-11-18 07:14:53.002192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:32.190 [2024-11-18 07:14:53.002208] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.190 [2024-11-18 07:14:53.002217] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.190 [2024-11-18 07:14:53.002224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2451650) 00:29:32.190 [2024-11-18 07:14:53.002234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.190 [2024-11-18 07:14:53.002254] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24abf40, cid 0, qid 0 00:29:32.190 [2024-11-18 07:14:53.002330] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.190 [2024-11-18 07:14:53.002343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.190 [2024-11-18 07:14:53.002350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.190 [2024-11-18 07:14:53.002357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24abf40) on tqpair=0x2451650 00:29:32.190 [2024-11-18 07:14:53.002365] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:32.190 [2024-11-18 07:14:53.002373] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:32.190 [2024-11-18 07:14:53.002386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:32.190 [2024-11-18 07:14:53.002501] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:29:32.190 [2024-11-18 07:14:53.002514] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:32.190 [2024-11-18 07:14:53.002529] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.190 [2024-11-18 07:14:53.002545] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.190 [2024-11-18 07:14:53.002551] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2451650) 00:29:32.190 [2024-11-18 07:14:53.002562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.190 [2024-11-18 07:14:53.002584] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24abf40, cid 0, qid 0 00:29:32.190 [2024-11-18 07:14:53.002683] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.190 [2024-11-18 07:14:53.002704] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.190 [2024-11-18 07:14:53.002716] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.190 [2024-11-18 07:14:53.002728] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24abf40) on tqpair=0x2451650 00:29:32.190 [2024-11-18 07:14:53.002742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:32.191 [2024-11-18 07:14:53.002769] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.002784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.002795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2451650) 00:29:32.191 [2024-11-18 07:14:53.002811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.191 [2024-11-18 07:14:53.002856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24abf40, cid 0, qid 0 00:29:32.191 [2024-11-18 07:14:53.002974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.191 [2024-11-18 07:14:53.002992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.191 [2024-11-18 07:14:53.003002] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.003012] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24abf40) on tqpair=0x2451650 00:29:32.191 [2024-11-18 07:14:53.003022] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:32.191 [2024-11-18 07:14:53.003034] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:32.191 [2024-11-18 07:14:53.003057] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:29:32.191 [2024-11-18 07:14:53.003079] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:32.191 [2024-11-18 07:14:53.003102] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.003116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2451650) 00:29:32.191 [2024-11-18 07:14:53.003133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.191 [2024-11-18 07:14:53.003175] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24abf40, cid 0, qid 0 00:29:32.191 [2024-11-18 07:14:53.003333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:32.191 [2024-11-18 07:14:53.003348] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:32.191 [2024-11-18 07:14:53.003355] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.003369] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2451650): datao=0, datal=4096, cccid=0 00:29:32.191 [2024-11-18 07:14:53.003378] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x24abf40) on tqpair(0x2451650): expected_datao=0, payload_size=4096 00:29:32.191 [2024-11-18 07:14:53.003386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.003407] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.003419] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.003437] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.191 [2024-11-18 07:14:53.003447] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.191 [2024-11-18 07:14:53.003454] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.003461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24abf40) on tqpair=0x2451650 00:29:32.191 [2024-11-18 07:14:53.003483] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:29:32.191 [2024-11-18 07:14:53.003502] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:29:32.191 [2024-11-18 07:14:53.003511] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:29:32.191 [2024-11-18 07:14:53.003526] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:29:32.191 [2024-11-18 07:14:53.003535] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:29:32.191 [2024-11-18 07:14:53.003544] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:29:32.191 [2024-11-18 07:14:53.003568] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:32.191 [2024-11-18 07:14:53.003583] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.003591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.003597] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2451650) 00:29:32.191 [2024-11-18 07:14:53.003608] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:32.191 [2024-11-18 07:14:53.003630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24abf40, cid 0, qid 0 00:29:32.191 [2024-11-18 07:14:53.003728] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.191 [2024-11-18 07:14:53.003740] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.191 [2024-11-18 07:14:53.003747] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.003754] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24abf40) on tqpair=0x2451650 00:29:32.191 [2024-11-18 07:14:53.003766] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.003774] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.003780] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2451650) 00:29:32.191 
[2024-11-18 07:14:53.003790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.191 [2024-11-18 07:14:53.003800] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.003807] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.003813] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2451650) 00:29:32.191 [2024-11-18 07:14:53.003822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.191 [2024-11-18 07:14:53.003835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.003843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.003850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2451650) 00:29:32.191 [2024-11-18 07:14:53.003859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.191 [2024-11-18 07:14:53.003868] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.003875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.003881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2451650) 00:29:32.191 [2024-11-18 07:14:53.003890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.191 [2024-11-18 07:14:53.003899] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:32.191 [2024-11-18 07:14:53.003913] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:32.191 [2024-11-18 07:14:53.003925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.003932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2451650) 00:29:32.191 [2024-11-18 07:14:53.003942] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.191 [2024-11-18 07:14:53.003964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24abf40, cid 0, qid 0 00:29:32.191 [2024-11-18 07:14:53.003975] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac0c0, cid 1, qid 0 00:29:32.191 [2024-11-18 07:14:53.003983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac240, cid 2, qid 0 00:29:32.191 [2024-11-18 07:14:53.003991] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac3c0, cid 3, qid 0 00:29:32.191 [2024-11-18 07:14:53.003999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac540, cid 4, qid 0 00:29:32.191 [2024-11-18 07:14:53.004160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.191 [2024-11-18 07:14:53.004172] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.191 [2024-11-18 07:14:53.004178] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:29:32.191 [2024-11-18 07:14:53.004185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac540) on tqpair=0x2451650 00:29:32.191 [2024-11-18 07:14:53.004199] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:29:32.191 [2024-11-18 07:14:53.004209] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:29:32.191 [2024-11-18 07:14:53.004226] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.004235] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2451650) 00:29:32.191 [2024-11-18 07:14:53.004246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.191 [2024-11-18 07:14:53.004266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac540, cid 4, qid 0 00:29:32.191 [2024-11-18 07:14:53.004366] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:32.191 [2024-11-18 07:14:53.004383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:32.191 [2024-11-18 07:14:53.004390] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.004396] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2451650): datao=0, datal=4096, cccid=4 00:29:32.191 [2024-11-18 07:14:53.004404] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24ac540) on tqpair(0x2451650): expected_datao=0, payload_size=4096 00:29:32.191 [2024-11-18 07:14:53.004415] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.004433] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.004442] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.044599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.191 [2024-11-18 07:14:53.044619] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.191 [2024-11-18 07:14:53.044627] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.044634] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac540) on tqpair=0x2451650 00:29:32.191 [2024-11-18 07:14:53.044654] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:29:32.191 [2024-11-18 07:14:53.044691] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.191 [2024-11-18 07:14:53.044702] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2451650) 00:29:32.191 [2024-11-18 07:14:53.044714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.191 [2024-11-18 07:14:53.044726] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.192 [2024-11-18 07:14:53.044733] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.192 [2024-11-18 07:14:53.044739] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2451650) 00:29:32.192 [2024-11-18 07:14:53.044749] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.192 [2024-11-18 07:14:53.044777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac540, cid 4, qid 0 00:29:32.192 [2024-11-18 07:14:53.044789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac6c0, cid 5, qid 0 00:29:32.192 [2024-11-18 07:14:53.044943] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:32.192 [2024-11-18 07:14:53.044958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:32.192 [2024-11-18 07:14:53.044965] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:32.192 [2024-11-18 07:14:53.044972] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2451650): datao=0, datal=1024, cccid=4 00:29:32.192 [2024-11-18 07:14:53.044979] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24ac540) on tqpair(0x2451650): expected_datao=0, payload_size=1024 00:29:32.192 [2024-11-18 07:14:53.044987] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.192 [2024-11-18 07:14:53.044997] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:32.192 [2024-11-18 07:14:53.045004] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:32.192 [2024-11-18 07:14:53.045013] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.192 [2024-11-18 07:14:53.045022] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.192 [2024-11-18 07:14:53.045028] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.192 [2024-11-18 07:14:53.045035] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac6c0) on tqpair=0x2451650 00:29:32.192 [2024-11-18 07:14:53.089509] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.192 [2024-11-18 07:14:53.089527] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.192 [2024-11-18 07:14:53.089534] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.192 [2024-11-18 07:14:53.089541] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac540) on tqpair=0x2451650 00:29:32.192 [2024-11-18 07:14:53.089558] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.192 [2024-11-18 07:14:53.089567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2451650) 00:29:32.192 [2024-11-18 07:14:53.089578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.192 [2024-11-18 07:14:53.089612] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac540, cid 4, qid 0 00:29:32.192 [2024-11-18 07:14:53.089735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:32.192 [2024-11-18 07:14:53.089749] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:32.192 [2024-11-18 07:14:53.089756] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:32.192 [2024-11-18 07:14:53.089763] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2451650): datao=0, datal=3072, cccid=4 00:29:32.192 [2024-11-18 07:14:53.089770] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24ac540) on tqpair(0x2451650): expected_datao=0, payload_size=3072 00:29:32.192 [2024-11-18 07:14:53.089778] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.192 [2024-11-18 07:14:53.089797] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:32.192 [2024-11-18 07:14:53.089816] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:32.192 [2024-11-18 07:14:53.089856] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.192 [2024-11-18 07:14:53.089868] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.192 [2024-11-18 07:14:53.089875] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.192 [2024-11-18 07:14:53.089882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac540) on tqpair=0x2451650 00:29:32.192 [2024-11-18 07:14:53.089896] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.192 [2024-11-18 07:14:53.089905] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2451650) 00:29:32.192 [2024-11-18 07:14:53.089916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.192 [2024-11-18 07:14:53.089943] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac540, cid 4, qid 0 00:29:32.192 [2024-11-18 07:14:53.090057] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:32.192 [2024-11-18 07:14:53.090069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:32.192 [2024-11-18 07:14:53.090076] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:32.192 [2024-11-18 07:14:53.090082] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2451650): datao=0, datal=8, cccid=4 00:29:32.192 [2024-11-18 07:14:53.090089] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24ac540) on tqpair(0x2451650): expected_datao=0, payload_size=8 00:29:32.192 [2024-11-18 07:14:53.090096] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.192 [2024-11-18 07:14:53.090106] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:32.192 [2024-11-18 07:14:53.090113] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:32.192 [2024-11-18 07:14:53.130658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.192 [2024-11-18 07:14:53.130677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.192 [2024-11-18 07:14:53.130684] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.192 [2024-11-18 07:14:53.130691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac540) on tqpair=0x2451650 00:29:32.192 ===================================================== 00:29:32.192 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:32.192 ===================================================== 00:29:32.192 Controller Capabilities/Features 00:29:32.192 ================================ 00:29:32.192 Vendor ID: 0000 00:29:32.192 Subsystem Vendor ID: 0000 00:29:32.192 Serial Number: .................... 00:29:32.192 Model Number: ........................................ 
00:29:32.192 Firmware Version: 25.01 00:29:32.192 Recommended Arb Burst: 0 00:29:32.192 IEEE OUI Identifier: 00 00 00 00:29:32.192 Multi-path I/O 00:29:32.192 May have multiple subsystem ports: No 00:29:32.192 May have multiple controllers: No 00:29:32.192 Associated with SR-IOV VF: No 00:29:32.192 Max Data Transfer Size: 131072 00:29:32.192 Max Number of Namespaces: 0 00:29:32.192 Max Number of I/O Queues: 1024 00:29:32.192 NVMe Specification Version (VS): 1.3 00:29:32.192 NVMe Specification Version (Identify): 1.3 00:29:32.192 Maximum Queue Entries: 128 00:29:32.192 Contiguous Queues Required: Yes 00:29:32.192 Arbitration Mechanisms Supported 00:29:32.192 Weighted Round Robin: Not Supported 00:29:32.192 Vendor Specific: Not Supported 00:29:32.192 Reset Timeout: 15000 ms 00:29:32.192 Doorbell Stride: 4 bytes 00:29:32.192 NVM Subsystem Reset: Not Supported 00:29:32.192 Command Sets Supported 00:29:32.192 NVM Command Set: Supported 00:29:32.192 Boot Partition: Not Supported 00:29:32.192 Memory Page Size Minimum: 4096 bytes 00:29:32.192 Memory Page Size Maximum: 4096 bytes 00:29:32.192 Persistent Memory Region: Not Supported 00:29:32.192 Optional Asynchronous Events Supported 00:29:32.192 Namespace Attribute Notices: Not Supported 00:29:32.192 Firmware Activation Notices: Not Supported 00:29:32.192 ANA Change Notices: Not Supported 00:29:32.192 PLE Aggregate Log Change Notices: Not Supported 00:29:32.192 LBA Status Info Alert Notices: Not Supported 00:29:32.192 EGE Aggregate Log Change Notices: Not Supported 00:29:32.192 Normal NVM Subsystem Shutdown event: Not Supported 00:29:32.192 Zone Descriptor Change Notices: Not Supported 00:29:32.192 Discovery Log Change Notices: Supported 00:29:32.192 Controller Attributes 00:29:32.192 128-bit Host Identifier: Not Supported 00:29:32.192 Non-Operational Permissive Mode: Not Supported 00:29:32.192 NVM Sets: Not Supported 00:29:32.192 Read Recovery Levels: Not Supported 00:29:32.192 Endurance Groups: Not Supported 00:29:32.192 Predictable Latency Mode: Not Supported 00:29:32.192 Traffic Based Keep ALive: Not Supported 00:29:32.192 Namespace Granularity: Not Supported 00:29:32.192 SQ Associations: Not Supported 00:29:32.192 UUID List: Not Supported 00:29:32.192 Multi-Domain Subsystem: Not Supported 00:29:32.192 Fixed Capacity Management: Not Supported 00:29:32.192 Variable Capacity Management: Not Supported 00:29:32.192 Delete Endurance Group: Not Supported 00:29:32.192 Delete NVM Set: Not Supported 00:29:32.192 Extended LBA Formats Supported: Not Supported 00:29:32.192 Flexible Data Placement Supported: Not Supported 00:29:32.192 00:29:32.192 Controller Memory Buffer Support 00:29:32.192 ================================ 00:29:32.192 Supported: No 00:29:32.192 00:29:32.192 Persistent Memory Region Support 00:29:32.192 ================================ 00:29:32.192 Supported: No 00:29:32.192 00:29:32.192 Admin Command Set Attributes 00:29:32.192 ============================ 00:29:32.192 Security Send/Receive: Not Supported 00:29:32.192 Format NVM: Not Supported 00:29:32.192 Firmware Activate/Download: Not Supported 00:29:32.192 Namespace Management: Not Supported 00:29:32.192 Device Self-Test: Not Supported 00:29:32.192 Directives: Not Supported 00:29:32.192 NVMe-MI: Not Supported 00:29:32.192 Virtualization Management: Not Supported 00:29:32.192 Doorbell Buffer Config: Not Supported 00:29:32.192 Get LBA Status Capability: Not Supported 00:29:32.192 Command & Feature Lockdown Capability: Not Supported 00:29:32.192 Abort Command Limit: 1 00:29:32.192 Async 
Event Request Limit: 4 00:29:32.192 Number of Firmware Slots: N/A 00:29:32.192 Firmware Slot 1 Read-Only: N/A 00:29:32.192 Firmware Activation Without Reset: N/A 00:29:32.192 Multiple Update Detection Support: N/A 00:29:32.192 Firmware Update Granularity: No Information Provided 00:29:32.192 Per-Namespace SMART Log: No 00:29:32.192 Asymmetric Namespace Access Log Page: Not Supported 00:29:32.193 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:32.193 Command Effects Log Page: Not Supported 00:29:32.193 Get Log Page Extended Data: Supported 00:29:32.193 Telemetry Log Pages: Not Supported 00:29:32.193 Persistent Event Log Pages: Not Supported 00:29:32.193 Supported Log Pages Log Page: May Support 00:29:32.193 Commands Supported & Effects Log Page: Not Supported 00:29:32.193 Feature Identifiers & Effects Log Page:May Support 00:29:32.193 NVMe-MI Commands & Effects Log Page: May Support 00:29:32.193 Data Area 4 for Telemetry Log: Not Supported 00:29:32.193 Error Log Page Entries Supported: 128 00:29:32.193 Keep Alive: Not Supported 00:29:32.193 00:29:32.193 NVM Command Set Attributes 00:29:32.193 ========================== 00:29:32.193 Submission Queue Entry Size 00:29:32.193 Max: 1 00:29:32.193 Min: 1 00:29:32.193 Completion Queue Entry Size 00:29:32.193 Max: 1 00:29:32.193 Min: 1 00:29:32.193 Number of Namespaces: 0 00:29:32.193 Compare Command: Not Supported 00:29:32.193 Write Uncorrectable Command: Not Supported 00:29:32.193 Dataset Management Command: Not Supported 00:29:32.193 Write Zeroes Command: Not Supported 00:29:32.193 Set Features Save Field: Not Supported 00:29:32.193 Reservations: Not Supported 00:29:32.193 Timestamp: Not Supported 00:29:32.193 Copy: Not Supported 00:29:32.193 Volatile Write Cache: Not Present 00:29:32.193 Atomic Write Unit (Normal): 1 00:29:32.193 Atomic Write Unit (PFail): 1 00:29:32.193 Atomic Compare & Write Unit: 1 00:29:32.193 Fused Compare & Write: Supported 00:29:32.193 Scatter-Gather List 00:29:32.193 SGL Command Set: Supported 00:29:32.193 SGL Keyed: Supported 00:29:32.193 SGL Bit Bucket Descriptor: Not Supported 00:29:32.193 SGL Metadata Pointer: Not Supported 00:29:32.193 Oversized SGL: Not Supported 00:29:32.193 SGL Metadata Address: Not Supported 00:29:32.193 SGL Offset: Supported 00:29:32.193 Transport SGL Data Block: Not Supported 00:29:32.193 Replay Protected Memory Block: Not Supported 00:29:32.193 00:29:32.193 Firmware Slot Information 00:29:32.193 ========================= 00:29:32.193 Active slot: 0 00:29:32.193 00:29:32.193 00:29:32.193 Error Log 00:29:32.193 ========= 00:29:32.193 00:29:32.193 Active Namespaces 00:29:32.193 ================= 00:29:32.193 Discovery Log Page 00:29:32.193 ================== 00:29:32.193 Generation Counter: 2 00:29:32.193 Number of Records: 2 00:29:32.193 Record Format: 0 00:29:32.193 00:29:32.193 Discovery Log Entry 0 00:29:32.193 ---------------------- 00:29:32.193 Transport Type: 3 (TCP) 00:29:32.193 Address Family: 1 (IPv4) 00:29:32.193 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:32.193 Entry Flags: 00:29:32.193 Duplicate Returned Information: 1 00:29:32.193 Explicit Persistent Connection Support for Discovery: 1 00:29:32.193 Transport Requirements: 00:29:32.193 Secure Channel: Not Required 00:29:32.193 Port ID: 0 (0x0000) 00:29:32.193 Controller ID: 65535 (0xffff) 00:29:32.193 Admin Max SQ Size: 128 00:29:32.193 Transport Service Identifier: 4420 00:29:32.193 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:32.193 Transport Address: 10.0.0.2 00:29:32.193 
Discovery Log Entry 1 00:29:32.193 ---------------------- 00:29:32.193 Transport Type: 3 (TCP) 00:29:32.193 Address Family: 1 (IPv4) 00:29:32.193 Subsystem Type: 2 (NVM Subsystem) 00:29:32.193 Entry Flags: 00:29:32.193 Duplicate Returned Information: 0 00:29:32.193 Explicit Persistent Connection Support for Discovery: 0 00:29:32.193 Transport Requirements: 00:29:32.193 Secure Channel: Not Required 00:29:32.193 Port ID: 0 (0x0000) 00:29:32.193 Controller ID: 65535 (0xffff) 00:29:32.193 Admin Max SQ Size: 128 00:29:32.193 Transport Service Identifier: 4420 00:29:32.193 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:32.193 Transport Address: 10.0.0.2 [2024-11-18 07:14:53.130821] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:29:32.193 [2024-11-18 07:14:53.130842] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24abf40) on tqpair=0x2451650 00:29:32.193 [2024-11-18 07:14:53.130854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.193 [2024-11-18 07:14:53.130863] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac0c0) on tqpair=0x2451650 00:29:32.193 [2024-11-18 07:14:53.130871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.193 [2024-11-18 07:14:53.130879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac240) on tqpair=0x2451650 00:29:32.193 [2024-11-18 07:14:53.130886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.193 [2024-11-18 07:14:53.130897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac3c0) on tqpair=0x2451650 00:29:32.193 [2024-11-18 07:14:53.130905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.193 [2024-11-18 07:14:53.130922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.193 [2024-11-18 07:14:53.130931] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.193 [2024-11-18 07:14:53.130951] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2451650) 00:29:32.193 [2024-11-18 07:14:53.130962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.193 [2024-11-18 07:14:53.130986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac3c0, cid 3, qid 0 00:29:32.193 [2024-11-18 07:14:53.131114] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.193 [2024-11-18 07:14:53.131126] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.193 [2024-11-18 07:14:53.131133] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.193 [2024-11-18 07:14:53.131140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac3c0) on tqpair=0x2451650 00:29:32.193 [2024-11-18 07:14:53.131152] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.193 [2024-11-18 07:14:53.131160] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.193 [2024-11-18 07:14:53.131166] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2451650) 00:29:32.193 [2024-11-18 
07:14:53.131176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.193 [2024-11-18 07:14:53.131202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac3c0, cid 3, qid 0 00:29:32.193 [2024-11-18 07:14:53.131300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.193 [2024-11-18 07:14:53.131313] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.193 [2024-11-18 07:14:53.131320] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.193 [2024-11-18 07:14:53.131327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac3c0) on tqpair=0x2451650 00:29:32.193 [2024-11-18 07:14:53.131335] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:29:32.193 [2024-11-18 07:14:53.131344] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:29:32.193 [2024-11-18 07:14:53.131359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.193 [2024-11-18 07:14:53.131368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.193 [2024-11-18 07:14:53.131374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2451650) 00:29:32.193 [2024-11-18 07:14:53.131384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.193 [2024-11-18 07:14:53.131405] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac3c0, cid 3, qid 0 00:29:32.193 [2024-11-18 07:14:53.131482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.193 [2024-11-18 07:14:53.131504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.193 [2024-11-18 07:14:53.131512] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.193 [2024-11-18 07:14:53.131518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac3c0) on tqpair=0x2451650 00:29:32.193 [2024-11-18 07:14:53.131536] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.193 [2024-11-18 07:14:53.131545] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.193 [2024-11-18 07:14:53.131551] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2451650) 00:29:32.193 [2024-11-18 07:14:53.131561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.193 [2024-11-18 07:14:53.131586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac3c0, cid 3, qid 0 00:29:32.193 [2024-11-18 07:14:53.131667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.193 [2024-11-18 07:14:53.131681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.193 [2024-11-18 07:14:53.131687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.193 [2024-11-18 07:14:53.131694] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac3c0) on tqpair=0x2451650 00:29:32.193 [2024-11-18 07:14:53.131710] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.193 [2024-11-18 07:14:53.131719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.193 [2024-11-18 07:14:53.131726] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2451650) 00:29:32.193 [2024-11-18 07:14:53.131736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.193 [2024-11-18 07:14:53.131756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac3c0, cid 3, qid 0 00:29:32.193 [2024-11-18 07:14:53.131848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.193 [2024-11-18 07:14:53.131860] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.194 [2024-11-18 07:14:53.131866] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.131873] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac3c0) on tqpair=0x2451650 00:29:32.194 [2024-11-18 07:14:53.131888] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.131897] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.131903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2451650) 00:29:32.194 [2024-11-18 07:14:53.131914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.194 [2024-11-18 07:14:53.131933] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac3c0, cid 3, qid 0 00:29:32.194 [2024-11-18 07:14:53.132017] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.194 [2024-11-18 07:14:53.132030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.194 [2024-11-18 07:14:53.132037] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.132044] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac3c0) on tqpair=0x2451650 00:29:32.194 [2024-11-18 07:14:53.132060] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.132068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.132075] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2451650) 00:29:32.194 [2024-11-18 07:14:53.132085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.194 [2024-11-18 07:14:53.132105] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac3c0, cid 3, qid 0 00:29:32.194 [2024-11-18 07:14:53.132188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.194 [2024-11-18 07:14:53.132201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.194 [2024-11-18 07:14:53.132208] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.132214] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac3c0) on tqpair=0x2451650 00:29:32.194 [2024-11-18 07:14:53.132231] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.132239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.132246] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2451650) 00:29:32.194 [2024-11-18 07:14:53.132256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.194 [2024-11-18 07:14:53.132280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac3c0, cid 3, qid 0 00:29:32.194 [2024-11-18 07:14:53.132357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.194 [2024-11-18 07:14:53.132369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.194 [2024-11-18 07:14:53.132375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.132382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac3c0) on tqpair=0x2451650 00:29:32.194 [2024-11-18 07:14:53.132398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.132407] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.132413] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2451650) 00:29:32.194 [2024-11-18 07:14:53.132423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.194 [2024-11-18 07:14:53.132444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac3c0, cid 3, qid 0 00:29:32.194 [2024-11-18 07:14:53.132548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.194 [2024-11-18 07:14:53.132562] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.194 [2024-11-18 07:14:53.132568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.132575] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac3c0) on tqpair=0x2451650 00:29:32.194 [2024-11-18 07:14:53.132591] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.132600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.132606] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2451650) 00:29:32.194 [2024-11-18 07:14:53.132616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.194 [2024-11-18 07:14:53.132637] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac3c0, cid 3, qid 0 00:29:32.194 [2024-11-18 07:14:53.132722] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.194 [2024-11-18 07:14:53.132733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.194 [2024-11-18 07:14:53.132740] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.132747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac3c0) on tqpair=0x2451650 00:29:32.194 [2024-11-18 07:14:53.132763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.132771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.132778] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2451650) 00:29:32.194 [2024-11-18 07:14:53.132788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.194 [2024-11-18 07:14:53.132808] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac3c0, cid 3, qid 0 00:29:32.194 
[2024-11-18 07:14:53.132899] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.194 [2024-11-18 07:14:53.132912] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.194 [2024-11-18 07:14:53.132919] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.132925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac3c0) on tqpair=0x2451650 00:29:32.194 [2024-11-18 07:14:53.132942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.132950] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.132957] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2451650) 00:29:32.194 [2024-11-18 07:14:53.132967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.194 [2024-11-18 07:14:53.132987] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac3c0, cid 3, qid 0 00:29:32.194 [2024-11-18 07:14:53.133067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.194 [2024-11-18 07:14:53.133081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.194 [2024-11-18 07:14:53.133088] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.133095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac3c0) on tqpair=0x2451650 00:29:32.194 [2024-11-18 07:14:53.133111] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.133119] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.133126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2451650) 00:29:32.194 [2024-11-18 07:14:53.133136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.194 [2024-11-18 07:14:53.133157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac3c0, cid 3, qid 0 00:29:32.194 [2024-11-18 07:14:53.133240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.194 [2024-11-18 07:14:53.133253] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.194 [2024-11-18 07:14:53.133259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.133266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac3c0) on tqpair=0x2451650 00:29:32.194 [2024-11-18 07:14:53.133282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.133290] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.133297] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2451650) 00:29:32.194 [2024-11-18 07:14:53.133307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.194 [2024-11-18 07:14:53.133327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac3c0, cid 3, qid 0 00:29:32.194 [2024-11-18 07:14:53.133404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.194 [2024-11-18 07:14:53.133417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:29:32.194 [2024-11-18 07:14:53.133423] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.133430] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac3c0) on tqpair=0x2451650 00:29:32.194 [2024-11-18 07:14:53.133446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.133455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.194 [2024-11-18 07:14:53.133461] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2451650) 00:29:32.194 [2024-11-18 07:14:53.133471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.194 [2024-11-18 07:14:53.137512] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac3c0, cid 3, qid 0 00:29:32.194 [2024-11-18 07:14:53.137533] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.195 [2024-11-18 07:14:53.137543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.195 [2024-11-18 07:14:53.137550] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.195 [2024-11-18 07:14:53.137556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac3c0) on tqpair=0x2451650 00:29:32.195 [2024-11-18 07:14:53.137573] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.195 [2024-11-18 07:14:53.137582] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.195 [2024-11-18 07:14:53.137588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2451650) 00:29:32.195 [2024-11-18 07:14:53.137599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.195 [2024-11-18 07:14:53.137620] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ac3c0, cid 3, qid 0 00:29:32.195 [2024-11-18 07:14:53.137730] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.195 [2024-11-18 07:14:53.137746] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.195 [2024-11-18 07:14:53.137754] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.195 [2024-11-18 07:14:53.137761] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ac3c0) on tqpair=0x2451650 00:29:32.195 [2024-11-18 07:14:53.137774] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:29:32.195 00:29:32.195 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:32.459 [2024-11-18 07:14:53.170423] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:29:32.459 [2024-11-18 07:14:53.170463] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid339833 ] 00:29:32.459 [2024-11-18 07:14:53.216349] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:29:32.459 [2024-11-18 07:14:53.216405] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:32.459 [2024-11-18 07:14:53.216415] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:32.459 [2024-11-18 07:14:53.216428] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:32.459 [2024-11-18 07:14:53.216441] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:32.459 [2024-11-18 07:14:53.220752] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:29:32.459 [2024-11-18 07:14:53.220806] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa86650 0 00:29:32.459 [2024-11-18 07:14:53.227510] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:32.459 [2024-11-18 07:14:53.227530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:32.459 [2024-11-18 07:14:53.227538] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:32.459 [2024-11-18 07:14:53.227544] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:32.459 [2024-11-18 07:14:53.227574] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.459 [2024-11-18 07:14:53.227586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.459 [2024-11-18 07:14:53.227593] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa86650) 00:29:32.459 [2024-11-18 07:14:53.227606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:32.459 [2024-11-18 07:14:53.227632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0f40, cid 0, qid 0 00:29:32.459 [2024-11-18 07:14:53.235508] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.459 [2024-11-18 07:14:53.235527] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.459 [2024-11-18 07:14:53.235534] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.459 [2024-11-18 07:14:53.235542] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae0f40) on tqpair=0xa86650 00:29:32.459 [2024-11-18 07:14:53.235556] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:32.459 [2024-11-18 07:14:53.235566] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:29:32.459 [2024-11-18 07:14:53.235576] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:29:32.459 [2024-11-18 07:14:53.235595] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.459 [2024-11-18 07:14:53.235608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.459 [2024-11-18 07:14:53.235615] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa86650) 00:29:32.459 [2024-11-18 07:14:53.235626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.459 [2024-11-18 07:14:53.235650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0f40, cid 0, qid 0 00:29:32.459 [2024-11-18 07:14:53.235742] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.459 [2024-11-18 07:14:53.235755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.459 [2024-11-18 07:14:53.235761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.459 [2024-11-18 07:14:53.235768] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae0f40) on tqpair=0xa86650 00:29:32.459 [2024-11-18 07:14:53.235777] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:29:32.459 [2024-11-18 07:14:53.235790] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:29:32.459 [2024-11-18 07:14:53.235802] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.459 [2024-11-18 07:14:53.235810] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.459 [2024-11-18 07:14:53.235816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa86650) 00:29:32.459 [2024-11-18 07:14:53.235827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.459 [2024-11-18 07:14:53.235858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0f40, cid 0, qid 0 00:29:32.459 [2024-11-18 07:14:53.235932] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.459 [2024-11-18 07:14:53.235944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.459 [2024-11-18 07:14:53.235951] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.459 [2024-11-18 07:14:53.235957] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae0f40) on tqpair=0xa86650 00:29:32.459 [2024-11-18 07:14:53.235966] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:29:32.459 [2024-11-18 07:14:53.235979] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:32.459 [2024-11-18 07:14:53.235991] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.459 [2024-11-18 07:14:53.235999] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.459 [2024-11-18 07:14:53.236006] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa86650) 00:29:32.460 [2024-11-18 07:14:53.236016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.460 [2024-11-18 07:14:53.236037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0f40, cid 0, qid 0 00:29:32.460 [2024-11-18 07:14:53.236133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.460 [2024-11-18 07:14:53.236148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.460 [2024-11-18 07:14:53.236154] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.236161] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae0f40) on tqpair=0xa86650 00:29:32.460 [2024-11-18 07:14:53.236169] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:32.460 [2024-11-18 07:14:53.236186] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.236195] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.236202] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa86650) 00:29:32.460 [2024-11-18 07:14:53.236212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.460 [2024-11-18 07:14:53.236237] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0f40, cid 0, qid 0 00:29:32.460 [2024-11-18 07:14:53.236311] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.460 [2024-11-18 07:14:53.236323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.460 [2024-11-18 07:14:53.236330] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.236337] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae0f40) on tqpair=0xa86650 00:29:32.460 [2024-11-18 07:14:53.236344] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:32.460 [2024-11-18 07:14:53.236353] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:32.460 [2024-11-18 07:14:53.236366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:32.460 [2024-11-18 07:14:53.236476] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:29:32.460 [2024-11-18 07:14:53.236487] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:32.460 [2024-11-18 07:14:53.236508] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.236517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.236523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa86650) 00:29:32.460 [2024-11-18 07:14:53.236533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.460 [2024-11-18 07:14:53.236555] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0f40, cid 0, qid 0 00:29:32.460 [2024-11-18 07:14:53.236670] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.460 [2024-11-18 07:14:53.236684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.460 [2024-11-18 07:14:53.236691] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.236698] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae0f40) on tqpair=0xa86650 00:29:32.460 [2024-11-18 
07:14:53.236706] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:32.460 [2024-11-18 07:14:53.236733] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.236742] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.236748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa86650) 00:29:32.460 [2024-11-18 07:14:53.236760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.460 [2024-11-18 07:14:53.236781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0f40, cid 0, qid 0 00:29:32.460 [2024-11-18 07:14:53.236867] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.460 [2024-11-18 07:14:53.236880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.460 [2024-11-18 07:14:53.236887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.236895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae0f40) on tqpair=0xa86650 00:29:32.460 [2024-11-18 07:14:53.236904] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:32.460 [2024-11-18 07:14:53.236913] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:32.460 [2024-11-18 07:14:53.236926] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:29:32.460 [2024-11-18 07:14:53.236944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:32.460 [2024-11-18 07:14:53.236958] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.236966] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa86650) 00:29:32.460 [2024-11-18 07:14:53.236978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.460 [2024-11-18 07:14:53.236999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0f40, cid 0, qid 0 00:29:32.460 [2024-11-18 07:14:53.237112] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:32.460 [2024-11-18 07:14:53.237127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:32.460 [2024-11-18 07:14:53.237134] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.237141] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa86650): datao=0, datal=4096, cccid=0 00:29:32.460 [2024-11-18 07:14:53.237149] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xae0f40) on tqpair(0xa86650): expected_datao=0, payload_size=4096 00:29:32.460 [2024-11-18 07:14:53.237156] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.237173] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.237182] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
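The identify-done entries just below (like the matching ones for the discovery controller earlier) report transport max_xfer_size 4294967295 and MDTS max_xfer_size 131072. That 131072 is the standard MDTS arithmetic: the limit is the minimum memory page size (4096 bytes here, per the identify output above) shifted left by MDTS, so an MDTS value of 5 is implied rather than printed. A small self-contained check of that arithmetic (the helper name is made up):

#include <stdint.h>
#include <stdio.h>

/* MDTS == 0 means "no limit" per the NVMe spec;
 * otherwise max transfer bytes = mpsmin_bytes << MDTS. */
static uint64_t
mdts_to_bytes(uint8_t mdts, uint64_t mpsmin_bytes)
{
        return mdts == 0 ? UINT64_MAX : (mpsmin_bytes << mdts);
}

int
main(void)
{
        /* 4096 << 5 == 131072, matching "MDTS max_xfer_size 131072" in the log. */
        printf("%llu\n", (unsigned long long)mdts_to_bytes(5, 4096));
        return 0;
}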
00:29:32.460 [2024-11-18 07:14:53.237194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.460 [2024-11-18 07:14:53.237204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.460 [2024-11-18 07:14:53.237211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.237217] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae0f40) on tqpair=0xa86650 00:29:32.460 [2024-11-18 07:14:53.237229] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:29:32.460 [2024-11-18 07:14:53.237238] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:29:32.460 [2024-11-18 07:14:53.237245] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:29:32.460 [2024-11-18 07:14:53.237257] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:29:32.460 [2024-11-18 07:14:53.237267] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:29:32.460 [2024-11-18 07:14:53.237276] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:29:32.460 [2024-11-18 07:14:53.237294] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:32.460 [2024-11-18 07:14:53.237308] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.237317] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.237324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa86650) 00:29:32.460 [2024-11-18 07:14:53.237335] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:32.460 [2024-11-18 07:14:53.237357] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0f40, cid 0, qid 0 00:29:32.460 [2024-11-18 07:14:53.237452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.460 [2024-11-18 07:14:53.237466] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.460 [2024-11-18 07:14:53.237484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.237499] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae0f40) on tqpair=0xa86650 00:29:32.460 [2024-11-18 07:14:53.237511] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.237525] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.237532] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa86650) 00:29:32.460 [2024-11-18 07:14:53.237542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.460 [2024-11-18 07:14:53.237552] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.237559] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.237567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=1 on tqpair(0xa86650) 00:29:32.460 [2024-11-18 07:14:53.237576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.460 [2024-11-18 07:14:53.237585] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.237592] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.237598] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa86650) 00:29:32.460 [2024-11-18 07:14:53.237607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.460 [2024-11-18 07:14:53.237616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.237623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.237629] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa86650) 00:29:32.460 [2024-11-18 07:14:53.237638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.460 [2024-11-18 07:14:53.237647] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:32.460 [2024-11-18 07:14:53.237661] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:32.460 [2024-11-18 07:14:53.237673] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.460 [2024-11-18 07:14:53.237680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa86650) 00:29:32.460 [2024-11-18 07:14:53.237690] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.460 [2024-11-18 07:14:53.237713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0f40, cid 0, qid 0 00:29:32.461 [2024-11-18 07:14:53.237724] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae10c0, cid 1, qid 0 00:29:32.461 [2024-11-18 07:14:53.237732] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae1240, cid 2, qid 0 00:29:32.461 [2024-11-18 07:14:53.237740] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae13c0, cid 3, qid 0 00:29:32.461 [2024-11-18 07:14:53.237748] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae1540, cid 4, qid 0 00:29:32.461 [2024-11-18 07:14:53.237894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.461 [2024-11-18 07:14:53.237908] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.461 [2024-11-18 07:14:53.237915] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.237921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae1540) on tqpair=0xa86650 00:29:32.461 [2024-11-18 07:14:53.237934] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:29:32.461 [2024-11-18 07:14:53.237944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 
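The trace above shows the host-side controller initialization sequence over NVMe/TCP: identify completes, AER is configured, four Asynchronous Event Requests are queued, and the keep-alive timeout is negotiated before the number of queues is set. A rough way to reproduce a similar debug trace by hand is sketched below; the target address and subsystem NQN are taken from this run, while the example binary path and the -L debug-log flag names are assumptions about the tooling (check identify -h on a debug build), not quotes from the test script.

  # Hedged sketch: point SPDK's identify example at the target used in this run
  # and enable NVMe debug logging (requires a debug build of SPDK).
  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
  "$SPDK_DIR/build/examples/identify" \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -L nvme -L nvme_tcp    # component names assumed; -L enables per-component debug logs
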
00:29:32.461 [2024-11-18 07:14:53.237957] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:29:32.461 [2024-11-18 07:14:53.237972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:32.461 [2024-11-18 07:14:53.237983] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.237991] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.237997] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa86650) 00:29:32.461 [2024-11-18 07:14:53.238007] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:32.461 [2024-11-18 07:14:53.238043] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae1540, cid 4, qid 0 00:29:32.461 [2024-11-18 07:14:53.238204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.461 [2024-11-18 07:14:53.238219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.461 [2024-11-18 07:14:53.238225] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.238232] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae1540) on tqpair=0xa86650 00:29:32.461 [2024-11-18 07:14:53.238300] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:29:32.461 [2024-11-18 07:14:53.238321] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:32.461 [2024-11-18 07:14:53.238340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.238348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa86650) 00:29:32.461 [2024-11-18 07:14:53.238359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.461 [2024-11-18 07:14:53.238380] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae1540, cid 4, qid 0 00:29:32.461 [2024-11-18 07:14:53.238503] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:32.461 [2024-11-18 07:14:53.238516] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:32.461 [2024-11-18 07:14:53.238523] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.238530] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa86650): datao=0, datal=4096, cccid=4 00:29:32.461 [2024-11-18 07:14:53.238537] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xae1540) on tqpair(0xa86650): expected_datao=0, payload_size=4096 00:29:32.461 [2024-11-18 07:14:53.238545] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.238561] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.238570] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.281504] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.461 [2024-11-18 07:14:53.281523] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.461 [2024-11-18 07:14:53.281530] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.281537] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae1540) on tqpair=0xa86650 00:29:32.461 [2024-11-18 07:14:53.281552] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:29:32.461 [2024-11-18 07:14:53.281573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:29:32.461 [2024-11-18 07:14:53.281591] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:29:32.461 [2024-11-18 07:14:53.281605] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.281613] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa86650) 00:29:32.461 [2024-11-18 07:14:53.281624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.461 [2024-11-18 07:14:53.281652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae1540, cid 4, qid 0 00:29:32.461 [2024-11-18 07:14:53.281802] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:32.461 [2024-11-18 07:14:53.281816] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:32.461 [2024-11-18 07:14:53.281823] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.281829] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa86650): datao=0, datal=4096, cccid=4 00:29:32.461 [2024-11-18 07:14:53.281837] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xae1540) on tqpair(0xa86650): expected_datao=0, payload_size=4096 00:29:32.461 [2024-11-18 07:14:53.281845] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.281861] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.281870] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.324500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.461 [2024-11-18 07:14:53.324518] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.461 [2024-11-18 07:14:53.324550] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.324558] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae1540) on tqpair=0xa86650 00:29:32.461 [2024-11-18 07:14:53.324580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:32.461 [2024-11-18 07:14:53.324601] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:32.461 [2024-11-18 07:14:53.324616] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.324624] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa86650) 00:29:32.461 [2024-11-18 07:14:53.324635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.461 [2024-11-18 07:14:53.324659] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae1540, cid 4, qid 0 00:29:32.461 [2024-11-18 07:14:53.324781] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:32.461 [2024-11-18 07:14:53.324793] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:32.461 [2024-11-18 07:14:53.324800] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.324807] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa86650): datao=0, datal=4096, cccid=4 00:29:32.461 [2024-11-18 07:14:53.324815] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xae1540) on tqpair(0xa86650): expected_datao=0, payload_size=4096 00:29:32.461 [2024-11-18 07:14:53.324822] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.324838] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.324847] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.370508] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.461 [2024-11-18 07:14:53.370526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.461 [2024-11-18 07:14:53.370549] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.370556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae1540) on tqpair=0xa86650 00:29:32.461 [2024-11-18 07:14:53.370569] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:32.461 [2024-11-18 07:14:53.370585] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:29:32.461 [2024-11-18 07:14:53.370601] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:29:32.461 [2024-11-18 07:14:53.370617] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:32.461 [2024-11-18 07:14:53.370626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:32.461 [2024-11-18 07:14:53.370635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:29:32.461 [2024-11-18 07:14:53.370643] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:29:32.461 [2024-11-18 07:14:53.370651] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:29:32.461 [2024-11-18 07:14:53.370659] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:29:32.461 [2024-11-18 07:14:53.370679] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.370688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa86650) 00:29:32.461 
[2024-11-18 07:14:53.370700] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.461 [2024-11-18 07:14:53.370711] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.370718] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.461 [2024-11-18 07:14:53.370725] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa86650) 00:29:32.461 [2024-11-18 07:14:53.370734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.461 [2024-11-18 07:14:53.370761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae1540, cid 4, qid 0 00:29:32.461 [2024-11-18 07:14:53.370773] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae16c0, cid 5, qid 0 00:29:32.461 [2024-11-18 07:14:53.370871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.461 [2024-11-18 07:14:53.370884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.462 [2024-11-18 07:14:53.370890] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.370897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae1540) on tqpair=0xa86650 00:29:32.462 [2024-11-18 07:14:53.370907] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.462 [2024-11-18 07:14:53.370917] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.462 [2024-11-18 07:14:53.370923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.370930] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae16c0) on tqpair=0xa86650 00:29:32.462 [2024-11-18 07:14:53.370945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.370954] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa86650) 00:29:32.462 [2024-11-18 07:14:53.370964] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.462 [2024-11-18 07:14:53.370985] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae16c0, cid 5, qid 0 00:29:32.462 [2024-11-18 07:14:53.371071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.462 [2024-11-18 07:14:53.371085] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.462 [2024-11-18 07:14:53.371092] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.371098] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae16c0) on tqpair=0xa86650 00:29:32.462 [2024-11-18 07:14:53.371114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.371123] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa86650) 00:29:32.462 [2024-11-18 07:14:53.371137] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.462 [2024-11-18 07:14:53.371158] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae16c0, cid 5, qid 0 00:29:32.462 [2024-11-18 07:14:53.371235] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:29:32.462 [2024-11-18 07:14:53.371247] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.462 [2024-11-18 07:14:53.371254] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.371260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae16c0) on tqpair=0xa86650 00:29:32.462 [2024-11-18 07:14:53.371275] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.371284] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa86650) 00:29:32.462 [2024-11-18 07:14:53.371295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.462 [2024-11-18 07:14:53.371315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae16c0, cid 5, qid 0 00:29:32.462 [2024-11-18 07:14:53.371384] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.462 [2024-11-18 07:14:53.371396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.462 [2024-11-18 07:14:53.371403] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.371409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae16c0) on tqpair=0xa86650 00:29:32.462 [2024-11-18 07:14:53.371433] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.371444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa86650) 00:29:32.462 [2024-11-18 07:14:53.371454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.462 [2024-11-18 07:14:53.371466] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.371474] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa86650) 00:29:32.462 [2024-11-18 07:14:53.371483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.462 [2024-11-18 07:14:53.371504] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.371513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xa86650) 00:29:32.462 [2024-11-18 07:14:53.371523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.462 [2024-11-18 07:14:53.371535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.371542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa86650) 00:29:32.462 [2024-11-18 07:14:53.371551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.462 [2024-11-18 07:14:53.371573] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae16c0, cid 5, qid 0 00:29:32.462 [2024-11-18 07:14:53.371585] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae1540, cid 4, qid 0 00:29:32.462 [2024-11-18 07:14:53.371593] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae1840, cid 6, qid 0 00:29:32.462 [2024-11-18 07:14:53.371600] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae19c0, cid 7, qid 0 00:29:32.462 [2024-11-18 07:14:53.371806] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:32.462 [2024-11-18 07:14:53.371819] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:32.462 [2024-11-18 07:14:53.371825] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.371835] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa86650): datao=0, datal=8192, cccid=5 00:29:32.462 [2024-11-18 07:14:53.371843] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xae16c0) on tqpair(0xa86650): expected_datao=0, payload_size=8192 00:29:32.462 [2024-11-18 07:14:53.371851] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.371871] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.371881] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.371890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:32.462 [2024-11-18 07:14:53.371899] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:32.462 [2024-11-18 07:14:53.371905] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.371912] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa86650): datao=0, datal=512, cccid=4 00:29:32.462 [2024-11-18 07:14:53.371919] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xae1540) on tqpair(0xa86650): expected_datao=0, payload_size=512 00:29:32.462 [2024-11-18 07:14:53.371926] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.371936] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.371943] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.371951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:32.462 [2024-11-18 07:14:53.371960] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:32.462 [2024-11-18 07:14:53.371966] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.371972] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa86650): datao=0, datal=512, cccid=6 00:29:32.462 [2024-11-18 07:14:53.371979] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xae1840) on tqpair(0xa86650): expected_datao=0, payload_size=512 00:29:32.462 [2024-11-18 07:14:53.371987] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.371996] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.372002] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.372011] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:32.462 [2024-11-18 07:14:53.372019] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:32.462 [2024-11-18 07:14:53.372025] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.372031] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0xa86650): datao=0, datal=4096, cccid=7 00:29:32.462 [2024-11-18 07:14:53.372039] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xae19c0) on tqpair(0xa86650): expected_datao=0, payload_size=4096 00:29:32.462 [2024-11-18 07:14:53.372046] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.372056] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.372063] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.372074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.462 [2024-11-18 07:14:53.372083] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.462 [2024-11-18 07:14:53.372090] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.372096] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae16c0) on tqpair=0xa86650 00:29:32.462 [2024-11-18 07:14:53.372115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.462 [2024-11-18 07:14:53.372142] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.462 [2024-11-18 07:14:53.372149] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.372155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae1540) on tqpair=0xa86650 00:29:32.462 [2024-11-18 07:14:53.372173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.462 [2024-11-18 07:14:53.372184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.462 [2024-11-18 07:14:53.372208] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.372215] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae1840) on tqpair=0xa86650 00:29:32.462 [2024-11-18 07:14:53.372225] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.462 [2024-11-18 07:14:53.372234] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.462 [2024-11-18 07:14:53.372240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.462 [2024-11-18 07:14:53.372246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae19c0) on tqpair=0xa86650 00:29:32.462 ===================================================== 00:29:32.462 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:32.462 ===================================================== 00:29:32.462 Controller Capabilities/Features 00:29:32.462 ================================ 00:29:32.462 Vendor ID: 8086 00:29:32.462 Subsystem Vendor ID: 8086 00:29:32.462 Serial Number: SPDK00000000000001 00:29:32.462 Model Number: SPDK bdev Controller 00:29:32.462 Firmware Version: 25.01 00:29:32.462 Recommended Arb Burst: 6 00:29:32.462 IEEE OUI Identifier: e4 d2 5c 00:29:32.462 Multi-path I/O 00:29:32.462 May have multiple subsystem ports: Yes 00:29:32.462 May have multiple controllers: Yes 00:29:32.462 Associated with SR-IOV VF: No 00:29:32.462 Max Data Transfer Size: 131072 00:29:32.462 Max Number of Namespaces: 32 00:29:32.462 Max Number of I/O Queues: 127 00:29:32.462 NVMe Specification Version (VS): 1.3 00:29:32.463 NVMe Specification Version (Identify): 1.3 00:29:32.463 Maximum Queue Entries: 128 00:29:32.463 Contiguous Queues Required: Yes 00:29:32.463 Arbitration Mechanisms Supported 00:29:32.463 Weighted Round Robin: Not Supported 
00:29:32.463 Vendor Specific: Not Supported 00:29:32.463 Reset Timeout: 15000 ms 00:29:32.463 Doorbell Stride: 4 bytes 00:29:32.463 NVM Subsystem Reset: Not Supported 00:29:32.463 Command Sets Supported 00:29:32.463 NVM Command Set: Supported 00:29:32.463 Boot Partition: Not Supported 00:29:32.463 Memory Page Size Minimum: 4096 bytes 00:29:32.463 Memory Page Size Maximum: 4096 bytes 00:29:32.463 Persistent Memory Region: Not Supported 00:29:32.463 Optional Asynchronous Events Supported 00:29:32.463 Namespace Attribute Notices: Supported 00:29:32.463 Firmware Activation Notices: Not Supported 00:29:32.463 ANA Change Notices: Not Supported 00:29:32.463 PLE Aggregate Log Change Notices: Not Supported 00:29:32.463 LBA Status Info Alert Notices: Not Supported 00:29:32.463 EGE Aggregate Log Change Notices: Not Supported 00:29:32.463 Normal NVM Subsystem Shutdown event: Not Supported 00:29:32.463 Zone Descriptor Change Notices: Not Supported 00:29:32.463 Discovery Log Change Notices: Not Supported 00:29:32.463 Controller Attributes 00:29:32.463 128-bit Host Identifier: Supported 00:29:32.463 Non-Operational Permissive Mode: Not Supported 00:29:32.463 NVM Sets: Not Supported 00:29:32.463 Read Recovery Levels: Not Supported 00:29:32.463 Endurance Groups: Not Supported 00:29:32.463 Predictable Latency Mode: Not Supported 00:29:32.463 Traffic Based Keep ALive: Not Supported 00:29:32.463 Namespace Granularity: Not Supported 00:29:32.463 SQ Associations: Not Supported 00:29:32.463 UUID List: Not Supported 00:29:32.463 Multi-Domain Subsystem: Not Supported 00:29:32.463 Fixed Capacity Management: Not Supported 00:29:32.463 Variable Capacity Management: Not Supported 00:29:32.463 Delete Endurance Group: Not Supported 00:29:32.463 Delete NVM Set: Not Supported 00:29:32.463 Extended LBA Formats Supported: Not Supported 00:29:32.463 Flexible Data Placement Supported: Not Supported 00:29:32.463 00:29:32.463 Controller Memory Buffer Support 00:29:32.463 ================================ 00:29:32.463 Supported: No 00:29:32.463 00:29:32.463 Persistent Memory Region Support 00:29:32.463 ================================ 00:29:32.463 Supported: No 00:29:32.463 00:29:32.463 Admin Command Set Attributes 00:29:32.463 ============================ 00:29:32.463 Security Send/Receive: Not Supported 00:29:32.463 Format NVM: Not Supported 00:29:32.463 Firmware Activate/Download: Not Supported 00:29:32.463 Namespace Management: Not Supported 00:29:32.463 Device Self-Test: Not Supported 00:29:32.463 Directives: Not Supported 00:29:32.463 NVMe-MI: Not Supported 00:29:32.463 Virtualization Management: Not Supported 00:29:32.463 Doorbell Buffer Config: Not Supported 00:29:32.463 Get LBA Status Capability: Not Supported 00:29:32.463 Command & Feature Lockdown Capability: Not Supported 00:29:32.463 Abort Command Limit: 4 00:29:32.463 Async Event Request Limit: 4 00:29:32.463 Number of Firmware Slots: N/A 00:29:32.463 Firmware Slot 1 Read-Only: N/A 00:29:32.463 Firmware Activation Without Reset: N/A 00:29:32.463 Multiple Update Detection Support: N/A 00:29:32.463 Firmware Update Granularity: No Information Provided 00:29:32.463 Per-Namespace SMART Log: No 00:29:32.463 Asymmetric Namespace Access Log Page: Not Supported 00:29:32.463 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:32.463 Command Effects Log Page: Supported 00:29:32.463 Get Log Page Extended Data: Supported 00:29:32.463 Telemetry Log Pages: Not Supported 00:29:32.463 Persistent Event Log Pages: Not Supported 00:29:32.463 Supported Log Pages Log Page: May Support 
00:29:32.463 Commands Supported & Effects Log Page: Not Supported 00:29:32.463 Feature Identifiers & Effects Log Page:May Support 00:29:32.463 NVMe-MI Commands & Effects Log Page: May Support 00:29:32.463 Data Area 4 for Telemetry Log: Not Supported 00:29:32.463 Error Log Page Entries Supported: 128 00:29:32.463 Keep Alive: Supported 00:29:32.463 Keep Alive Granularity: 10000 ms 00:29:32.463 00:29:32.463 NVM Command Set Attributes 00:29:32.463 ========================== 00:29:32.463 Submission Queue Entry Size 00:29:32.463 Max: 64 00:29:32.463 Min: 64 00:29:32.463 Completion Queue Entry Size 00:29:32.463 Max: 16 00:29:32.463 Min: 16 00:29:32.463 Number of Namespaces: 32 00:29:32.463 Compare Command: Supported 00:29:32.463 Write Uncorrectable Command: Not Supported 00:29:32.463 Dataset Management Command: Supported 00:29:32.463 Write Zeroes Command: Supported 00:29:32.463 Set Features Save Field: Not Supported 00:29:32.463 Reservations: Supported 00:29:32.463 Timestamp: Not Supported 00:29:32.463 Copy: Supported 00:29:32.463 Volatile Write Cache: Present 00:29:32.463 Atomic Write Unit (Normal): 1 00:29:32.463 Atomic Write Unit (PFail): 1 00:29:32.463 Atomic Compare & Write Unit: 1 00:29:32.463 Fused Compare & Write: Supported 00:29:32.463 Scatter-Gather List 00:29:32.463 SGL Command Set: Supported 00:29:32.463 SGL Keyed: Supported 00:29:32.463 SGL Bit Bucket Descriptor: Not Supported 00:29:32.463 SGL Metadata Pointer: Not Supported 00:29:32.463 Oversized SGL: Not Supported 00:29:32.463 SGL Metadata Address: Not Supported 00:29:32.463 SGL Offset: Supported 00:29:32.463 Transport SGL Data Block: Not Supported 00:29:32.463 Replay Protected Memory Block: Not Supported 00:29:32.463 00:29:32.463 Firmware Slot Information 00:29:32.463 ========================= 00:29:32.463 Active slot: 1 00:29:32.463 Slot 1 Firmware Revision: 25.01 00:29:32.463 00:29:32.463 00:29:32.463 Commands Supported and Effects 00:29:32.463 ============================== 00:29:32.463 Admin Commands 00:29:32.463 -------------- 00:29:32.463 Get Log Page (02h): Supported 00:29:32.463 Identify (06h): Supported 00:29:32.463 Abort (08h): Supported 00:29:32.463 Set Features (09h): Supported 00:29:32.463 Get Features (0Ah): Supported 00:29:32.463 Asynchronous Event Request (0Ch): Supported 00:29:32.463 Keep Alive (18h): Supported 00:29:32.463 I/O Commands 00:29:32.463 ------------ 00:29:32.463 Flush (00h): Supported LBA-Change 00:29:32.463 Write (01h): Supported LBA-Change 00:29:32.463 Read (02h): Supported 00:29:32.463 Compare (05h): Supported 00:29:32.463 Write Zeroes (08h): Supported LBA-Change 00:29:32.463 Dataset Management (09h): Supported LBA-Change 00:29:32.463 Copy (19h): Supported LBA-Change 00:29:32.463 00:29:32.463 Error Log 00:29:32.463 ========= 00:29:32.463 00:29:32.463 Arbitration 00:29:32.463 =========== 00:29:32.463 Arbitration Burst: 1 00:29:32.463 00:29:32.463 Power Management 00:29:32.463 ================ 00:29:32.463 Number of Power States: 1 00:29:32.463 Current Power State: Power State #0 00:29:32.463 Power State #0: 00:29:32.463 Max Power: 0.00 W 00:29:32.463 Non-Operational State: Operational 00:29:32.463 Entry Latency: Not Reported 00:29:32.463 Exit Latency: Not Reported 00:29:32.463 Relative Read Throughput: 0 00:29:32.463 Relative Read Latency: 0 00:29:32.463 Relative Write Throughput: 0 00:29:32.463 Relative Write Latency: 0 00:29:32.463 Idle Power: Not Reported 00:29:32.463 Active Power: Not Reported 00:29:32.463 Non-Operational Permissive Mode: Not Supported 00:29:32.463 00:29:32.463 Health 
Information 00:29:32.463 ================== 00:29:32.463 Critical Warnings: 00:29:32.463 Available Spare Space: OK 00:29:32.463 Temperature: OK 00:29:32.463 Device Reliability: OK 00:29:32.463 Read Only: No 00:29:32.463 Volatile Memory Backup: OK 00:29:32.463 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:32.463 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:32.463 Available Spare: 0% 00:29:32.463 Available Spare Threshold: 0% 00:29:32.463 Life Percentage Used:[2024-11-18 07:14:53.372359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.463 [2024-11-18 07:14:53.372371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa86650) 00:29:32.463 [2024-11-18 07:14:53.372381] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.463 [2024-11-18 07:14:53.372402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae19c0, cid 7, qid 0 00:29:32.463 [2024-11-18 07:14:53.372549] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.463 [2024-11-18 07:14:53.372563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.463 [2024-11-18 07:14:53.372570] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.463 [2024-11-18 07:14:53.372577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae19c0) on tqpair=0xa86650 00:29:32.463 [2024-11-18 07:14:53.372622] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:29:32.463 [2024-11-18 07:14:53.372641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae0f40) on tqpair=0xa86650 00:29:32.464 [2024-11-18 07:14:53.372652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.464 [2024-11-18 07:14:53.372660] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae10c0) on tqpair=0xa86650 00:29:32.464 [2024-11-18 07:14:53.372668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.464 [2024-11-18 07:14:53.372676] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae1240) on tqpair=0xa86650 00:29:32.464 [2024-11-18 07:14:53.372684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.464 [2024-11-18 07:14:53.372692] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae13c0) on tqpair=0xa86650 00:29:32.464 [2024-11-18 07:14:53.372700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.464 [2024-11-18 07:14:53.372712] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.372721] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.372727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa86650) 00:29:32.464 [2024-11-18 07:14:53.372738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.464 [2024-11-18 07:14:53.372759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae13c0, cid 3, qid 0 00:29:32.464 [2024-11-18 
07:14:53.372944] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.464 [2024-11-18 07:14:53.372956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.464 [2024-11-18 07:14:53.372963] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.372970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae13c0) on tqpair=0xa86650 00:29:32.464 [2024-11-18 07:14:53.372981] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.372989] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.372996] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa86650) 00:29:32.464 [2024-11-18 07:14:53.373010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.464 [2024-11-18 07:14:53.373036] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae13c0, cid 3, qid 0 00:29:32.464 [2024-11-18 07:14:53.373124] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.464 [2024-11-18 07:14:53.373138] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.464 [2024-11-18 07:14:53.373145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.373152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae13c0) on tqpair=0xa86650 00:29:32.464 [2024-11-18 07:14:53.373159] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:29:32.464 [2024-11-18 07:14:53.373167] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:29:32.464 [2024-11-18 07:14:53.373183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.373191] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.373198] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa86650) 00:29:32.464 [2024-11-18 07:14:53.373208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.464 [2024-11-18 07:14:53.373228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae13c0, cid 3, qid 0 00:29:32.464 [2024-11-18 07:14:53.373304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.464 [2024-11-18 07:14:53.373315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.464 [2024-11-18 07:14:53.373322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.373329] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae13c0) on tqpair=0xa86650 00:29:32.464 [2024-11-18 07:14:53.373344] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.373353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.373360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa86650) 00:29:32.464 [2024-11-18 07:14:53.373370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.464 [2024-11-18 07:14:53.373389] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae13c0, cid 3, qid 0 00:29:32.464 [2024-11-18 07:14:53.373466] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.464 [2024-11-18 07:14:53.373479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.464 [2024-11-18 07:14:53.373486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.373501] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae13c0) on tqpair=0xa86650 00:29:32.464 [2024-11-18 07:14:53.373518] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.373528] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.373534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa86650) 00:29:32.464 [2024-11-18 07:14:53.373544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.464 [2024-11-18 07:14:53.373565] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae13c0, cid 3, qid 0 00:29:32.464 [2024-11-18 07:14:53.373643] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.464 [2024-11-18 07:14:53.373655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.464 [2024-11-18 07:14:53.373662] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.373668] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae13c0) on tqpair=0xa86650 00:29:32.464 [2024-11-18 07:14:53.373684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.373696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.373703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa86650) 00:29:32.464 [2024-11-18 07:14:53.373713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.464 [2024-11-18 07:14:53.373733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae13c0, cid 3, qid 0 00:29:32.464 [2024-11-18 07:14:53.373812] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.464 [2024-11-18 07:14:53.373826] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.464 [2024-11-18 07:14:53.373832] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.373839] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae13c0) on tqpair=0xa86650 00:29:32.464 [2024-11-18 07:14:53.373855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.373864] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.373870] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa86650) 00:29:32.464 [2024-11-18 07:14:53.373880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.464 [2024-11-18 07:14:53.373900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae13c0, cid 3, qid 0 00:29:32.464 [2024-11-18 07:14:53.373973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.464 [2024-11-18 
07:14:53.373987] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.464 [2024-11-18 07:14:53.373993] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.374000] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae13c0) on tqpair=0xa86650 00:29:32.464 [2024-11-18 07:14:53.374016] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.374025] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.374031] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa86650) 00:29:32.464 [2024-11-18 07:14:53.374041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.464 [2024-11-18 07:14:53.374061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae13c0, cid 3, qid 0 00:29:32.464 [2024-11-18 07:14:53.374133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.464 [2024-11-18 07:14:53.374144] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.464 [2024-11-18 07:14:53.374151] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.374158] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae13c0) on tqpair=0xa86650 00:29:32.464 [2024-11-18 07:14:53.374173] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.374182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.464 [2024-11-18 07:14:53.374188] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa86650) 00:29:32.464 [2024-11-18 07:14:53.374198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.464 [2024-11-18 07:14:53.374218] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae13c0, cid 3, qid 0 00:29:32.465 [2024-11-18 07:14:53.374291] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.465 [2024-11-18 07:14:53.374303] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.465 [2024-11-18 07:14:53.374309] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.465 [2024-11-18 07:14:53.374316] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae13c0) on tqpair=0xa86650 00:29:32.465 [2024-11-18 07:14:53.374331] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.465 [2024-11-18 07:14:53.374340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.465 [2024-11-18 07:14:53.374350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa86650) 00:29:32.465 [2024-11-18 07:14:53.374361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.465 [2024-11-18 07:14:53.374381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae13c0, cid 3, qid 0 00:29:32.465 [2024-11-18 07:14:53.374460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.465 [2024-11-18 07:14:53.374473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.465 [2024-11-18 07:14:53.374480] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.465 [2024-11-18 
07:14:53.374486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae13c0) on tqpair=0xa86650 00:29:32.465 [2024-11-18 07:14:53.378533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:32.465 [2024-11-18 07:14:53.378544] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:32.465 [2024-11-18 07:14:53.378550] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa86650) 00:29:32.465 [2024-11-18 07:14:53.378576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.465 [2024-11-18 07:14:53.378599] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae13c0, cid 3, qid 0 00:29:32.465 [2024-11-18 07:14:53.378719] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:32.465 [2024-11-18 07:14:53.378731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:32.465 [2024-11-18 07:14:53.378738] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:32.465 [2024-11-18 07:14:53.378745] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae13c0) on tqpair=0xa86650 00:29:32.465 [2024-11-18 07:14:53.378757] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:29:32.465 0% 00:29:32.465 Data Units Read: 0 00:29:32.465 Data Units Written: 0 00:29:32.465 Host Read Commands: 0 00:29:32.465 Host Write Commands: 0 00:29:32.465 Controller Busy Time: 0 minutes 00:29:32.465 Power Cycles: 0 00:29:32.465 Power On Hours: 0 hours 00:29:32.465 Unsafe Shutdowns: 0 00:29:32.465 Unrecoverable Media Errors: 0 00:29:32.465 Lifetime Error Log Entries: 0 00:29:32.465 Warning Temperature Time: 0 minutes 00:29:32.465 Critical Temperature Time: 0 minutes 00:29:32.465 00:29:32.465 Number of Queues 00:29:32.465 ================ 00:29:32.465 Number of I/O Submission Queues: 127 00:29:32.465 Number of I/O Completion Queues: 127 00:29:32.465 00:29:32.465 Active Namespaces 00:29:32.465 ================= 00:29:32.465 Namespace ID:1 00:29:32.465 Error Recovery Timeout: Unlimited 00:29:32.465 Command Set Identifier: NVM (00h) 00:29:32.465 Deallocate: Supported 00:29:32.465 Deallocated/Unwritten Error: Not Supported 00:29:32.465 Deallocated Read Value: Unknown 00:29:32.465 Deallocate in Write Zeroes: Not Supported 00:29:32.465 Deallocated Guard Field: 0xFFFF 00:29:32.465 Flush: Supported 00:29:32.465 Reservation: Supported 00:29:32.465 Namespace Sharing Capabilities: Multiple Controllers 00:29:32.465 Size (in LBAs): 131072 (0GiB) 00:29:32.465 Capacity (in LBAs): 131072 (0GiB) 00:29:32.465 Utilization (in LBAs): 131072 (0GiB) 00:29:32.465 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:32.465 EUI64: ABCDEF0123456789 00:29:32.465 UUID: 2d6c31f7-c127-4520-8a78-7c0bdb4504fd 00:29:32.465 Thin Provisioning: Not Supported 00:29:32.465 Per-NS Atomic Units: Yes 00:29:32.465 Atomic Boundary Size (Normal): 0 00:29:32.465 Atomic Boundary Size (PFail): 0 00:29:32.465 Atomic Boundary Offset: 0 00:29:32.465 Maximum Single Source Range Length: 65535 00:29:32.465 Maximum Copy Length: 65535 00:29:32.465 Maximum Source Range Count: 1 00:29:32.465 NGUID/EUI64 Never Reused: No 00:29:32.465 Namespace Write Protected: No 00:29:32.465 Number of LBA Formats: 1 00:29:32.465 Current LBA Format: LBA Format #00 00:29:32.465 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:32.465 00:29:32.465 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@51 -- # sync 00:29:32.465 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:32.465 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.465 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:32.465 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.465 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:32.465 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:32.465 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:32.465 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:32.465 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:32.465 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:32.465 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:32.465 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:32.465 rmmod nvme_tcp 00:29:32.465 rmmod nvme_fabrics 00:29:32.725 rmmod nvme_keyring 00:29:32.725 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:32.725 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:32.725 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:32.725 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 339694 ']' 00:29:32.725 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 339694 00:29:32.725 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 339694 ']' 00:29:32.725 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 339694 00:29:32.725 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:29:32.725 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.725 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 339694 00:29:32.725 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:32.725 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:32.725 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 339694' 00:29:32.725 killing process with pid 339694 00:29:32.725 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 339694 00:29:32.725 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 339694 00:29:32.986 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:32.986 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:32.986 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:32.986 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:32.986 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:29:32.986 07:14:53 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:32.986 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:29:32.986 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:32.986 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:32.986 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.986 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.986 07:14:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.895 07:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:34.895 00:29:34.895 real 0m5.711s 00:29:34.895 user 0m4.992s 00:29:34.895 sys 0m2.018s 00:29:34.895 07:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:34.895 07:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:34.895 ************************************ 00:29:34.895 END TEST nvmf_identify 00:29:34.895 ************************************ 00:29:34.895 07:14:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:34.895 07:14:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:34.895 07:14:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:34.895 07:14:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.895 ************************************ 00:29:34.895 START TEST nvmf_perf 00:29:34.895 ************************************ 00:29:34.895 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:35.155 * Looking for test storage... 
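The nvmftestfini teardown traced above unloads the host fabrics modules (nvme-tcp, nvme-fabrics, nvme-keyring), kills the nvmf target process (pid 339694, reactor_0), strips the SPDK_NVMF iptables rules, and flushes the test interface cvl_0_1 before the next test (nvmf_perf) starts. A minimal manual equivalent, mirroring only the commands visible in the trace and treating the pid and interface name as placeholders, might look like:

  # Hedged sketch of the visible cleanup steps; TGT_PID and IFACE are placeholders
  # (339694 and cvl_0_1 in this run), not values exported by the helper scripts.
  TGT_PID=${TGT_PID:?set to the nvmf_tgt pid}
  IFACE=${IFACE:-cvl_0_1}
  sudo modprobe -v -r nvme-tcp        # unload the NVMe/TCP host transport
  sudo modprobe -v -r nvme-fabrics    # unload the fabrics core (the log also shows nvme_keyring being removed)
  sudo kill "$TGT_PID" 2>/dev/null || true                          # stop the target reactor
  sudo iptables-save | grep -v SPDK_NVMF | sudo iptables-restore    # drop the test firewall rules
  sudo ip -4 addr flush "$IFACE"                                    # clear the test interface address
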
00:29:35.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:35.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.155 --rc genhtml_branch_coverage=1 00:29:35.155 --rc genhtml_function_coverage=1 00:29:35.155 --rc genhtml_legend=1 00:29:35.155 --rc geninfo_all_blocks=1 00:29:35.155 --rc geninfo_unexecuted_blocks=1 00:29:35.155 00:29:35.155 ' 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:35.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.155 --rc genhtml_branch_coverage=1 00:29:35.155 --rc genhtml_function_coverage=1 00:29:35.155 --rc genhtml_legend=1 00:29:35.155 --rc geninfo_all_blocks=1 00:29:35.155 --rc geninfo_unexecuted_blocks=1 00:29:35.155 00:29:35.155 ' 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:35.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.155 --rc genhtml_branch_coverage=1 00:29:35.155 --rc genhtml_function_coverage=1 00:29:35.155 --rc genhtml_legend=1 00:29:35.155 --rc geninfo_all_blocks=1 00:29:35.155 --rc geninfo_unexecuted_blocks=1 00:29:35.155 00:29:35.155 ' 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:35.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.155 --rc genhtml_branch_coverage=1 00:29:35.155 --rc genhtml_function_coverage=1 00:29:35.155 --rc genhtml_legend=1 00:29:35.155 --rc geninfo_all_blocks=1 00:29:35.155 --rc geninfo_unexecuted_blocks=1 00:29:35.155 00:29:35.155 ' 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.155 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:35.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.156 07:14:55 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:35.156 07:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:37.695 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:37.695 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:37.695 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:37.695 07:14:58 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:37.695 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:37.696 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:37.696 07:14:58 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:37.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:37.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:29:37.696 00:29:37.696 --- 10.0.0.2 ping statistics --- 00:29:37.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.696 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:37.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:37.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:29:37.696 00:29:37.696 --- 10.0.0.1 ping statistics --- 00:29:37.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.696 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=341779 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 341779 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 341779 ']' 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:29:37.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:37.696 [2024-11-18 07:14:58.346381] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:29:37.696 [2024-11-18 07:14:58.346474] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:37.696 [2024-11-18 07:14:58.421150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:37.696 [2024-11-18 07:14:58.466990] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:37.696 [2024-11-18 07:14:58.467056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:37.696 [2024-11-18 07:14:58.467079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:37.696 [2024-11-18 07:14:58.467090] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:37.696 [2024-11-18 07:14:58.467100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:37.696 [2024-11-18 07:14:58.468667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.696 [2024-11-18 07:14:58.468697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:37.696 [2024-11-18 07:14:58.468754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:37.696 [2024-11-18 07:14:58.468757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:37.696 07:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:40.987 07:15:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:40.987 07:15:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:41.245 07:15:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:29:41.245 07:15:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:41.504 07:15:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
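The bdev setup recorded just above can be read as the following condensed sequence — a sketch reconstructed from the trace entries, not a verbatim excerpt of perf.sh; the rpc.py path and the 0000:88:00.0 PCI address are specific to this runner:

    # rpc.py path as set at host/perf.sh@15 in the trace above
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # query the traddr of the local NVMe bdev that gen_nvme.sh | load_subsystem_config registered
    local_nvme_trid=$($rpc_py framework_get_config bdev \
        | jq -r '.[].params | select(.name=="Nvme0").traddr')   # resolves to 0000:88:00.0 on this host

    # create the 64 MiB, 512-byte-block malloc bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512);
    # the RPC prints the new bdev name, which the script collects as "bdevs"
    bdevs=$($rpc_py bdev_malloc_create 64 512)                   # e.g. Malloc0

Because local_nvme_trid is non-empty, perf.sh then appends the local NVMe namespace (Nvme0n1) to this bdev list and exports both through subsystem nqn.2016-06.io.spdk:cnode1 over TCP, as the next trace entries show.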
00:29:41.504 07:15:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:29:41.504 07:15:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:41.504 07:15:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:41.504 07:15:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:41.762 [2024-11-18 07:15:02.591205] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.762 07:15:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:42.021 07:15:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:42.021 07:15:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:42.279 07:15:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:42.279 07:15:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:42.537 07:15:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:42.796 [2024-11-18 07:15:03.719311] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:42.796 07:15:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:43.055 07:15:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:29:43.055 07:15:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:43.055 07:15:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:43.055 07:15:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:44.430 Initializing NVMe Controllers 00:29:44.430 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:29:44.430 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:29:44.430 Initialization complete. Launching workers. 
00:29:44.430 ======================================================== 00:29:44.430 Latency(us) 00:29:44.430 Device Information : IOPS MiB/s Average min max 00:29:44.430 PCIE (0000:88:00.0) NSID 1 from core 0: 83941.92 327.90 380.56 37.81 4320.08 00:29:44.430 ======================================================== 00:29:44.430 Total : 83941.92 327.90 380.56 37.81 4320.08 00:29:44.430 00:29:44.430 07:15:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:45.807 Initializing NVMe Controllers 00:29:45.807 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:45.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:45.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:45.807 Initialization complete. Launching workers. 00:29:45.807 ======================================================== 00:29:45.807 Latency(us) 00:29:45.807 Device Information : IOPS MiB/s Average min max 00:29:45.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 58.00 0.23 17337.22 147.38 45877.28 00:29:45.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 57.00 0.22 17762.49 7940.03 47911.82 00:29:45.807 ======================================================== 00:29:45.807 Total : 115.00 0.45 17548.01 147.38 47911.82 00:29:45.807 00:29:45.807 07:15:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:47.186 Initializing NVMe Controllers 00:29:47.186 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:47.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:47.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:47.186 Initialization complete. Launching workers. 00:29:47.187 ======================================================== 00:29:47.187 Latency(us) 00:29:47.187 Device Information : IOPS MiB/s Average min max 00:29:47.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7921.67 30.94 4041.84 644.36 10267.23 00:29:47.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3807.48 14.87 8432.22 4932.32 18600.52 00:29:47.187 ======================================================== 00:29:47.187 Total : 11729.15 45.82 5467.03 644.36 18600.52 00:29:47.187 00:29:47.187 07:15:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:47.187 07:15:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:47.187 07:15:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:49.716 Initializing NVMe Controllers 00:29:49.716 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:49.716 Controller IO queue size 128, less than required. 00:29:49.716 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:49.716 Controller IO queue size 128, less than required. 00:29:49.716 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:49.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:49.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:49.716 Initialization complete. Launching workers. 00:29:49.716 ======================================================== 00:29:49.716 Latency(us) 00:29:49.716 Device Information : IOPS MiB/s Average min max 00:29:49.716 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1575.79 393.95 82915.63 55614.42 136915.59 00:29:49.716 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 552.43 138.11 244781.97 93718.12 371740.08 00:29:49.716 ======================================================== 00:29:49.716 Total : 2128.22 532.05 124931.68 55614.42 371740.08 00:29:49.716 00:29:49.716 07:15:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:49.716 No valid NVMe controllers or AIO or URING devices found 00:29:49.716 Initializing NVMe Controllers 00:29:49.716 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:49.717 Controller IO queue size 128, less than required. 00:29:49.717 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:49.717 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:49.717 Controller IO queue size 128, less than required. 00:29:49.717 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:49.717 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:29:49.717 WARNING: Some requested NVMe devices were skipped 00:29:49.717 07:15:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:52.252 Initializing NVMe Controllers 00:29:52.252 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:52.252 Controller IO queue size 128, less than required. 00:29:52.252 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:52.252 Controller IO queue size 128, less than required. 00:29:52.252 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:52.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:52.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:52.252 Initialization complete. Launching workers. 
00:29:52.252 00:29:52.252 ==================== 00:29:52.252 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:52.252 TCP transport: 00:29:52.252 polls: 9295 00:29:52.252 idle_polls: 6078 00:29:52.252 sock_completions: 3217 00:29:52.252 nvme_completions: 6081 00:29:52.252 submitted_requests: 9082 00:29:52.252 queued_requests: 1 00:29:52.252 00:29:52.252 ==================== 00:29:52.252 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:52.252 TCP transport: 00:29:52.252 polls: 11875 00:29:52.252 idle_polls: 8499 00:29:52.252 sock_completions: 3376 00:29:52.252 nvme_completions: 6297 00:29:52.252 submitted_requests: 9428 00:29:52.252 queued_requests: 1 00:29:52.252 ======================================================== 00:29:52.252 Latency(us) 00:29:52.252 Device Information : IOPS MiB/s Average min max 00:29:52.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1519.71 379.93 86111.24 47244.83 159277.89 00:29:52.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1573.69 393.42 81490.57 40198.08 126663.70 00:29:52.252 ======================================================== 00:29:52.252 Total : 3093.40 773.35 83760.59 40198.08 159277.89 00:29:52.252 00:29:52.252 07:15:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:52.252 07:15:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:52.510 07:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:52.510 07:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:29:52.511 07:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:55.799 07:15:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=311e5770-2a13-4e38-bfe3-06a9cc9c82fd 00:29:55.799 07:15:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 311e5770-2a13-4e38-bfe3-06a9cc9c82fd 00:29:55.799 07:15:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=311e5770-2a13-4e38-bfe3-06a9cc9c82fd 00:29:55.799 07:15:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:29:55.799 07:15:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:29:55.799 07:15:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:29:55.799 07:15:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:56.058 07:15:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:29:56.058 { 00:29:56.058 "uuid": "311e5770-2a13-4e38-bfe3-06a9cc9c82fd", 00:29:56.058 "name": "lvs_0", 00:29:56.058 "base_bdev": "Nvme0n1", 00:29:56.058 "total_data_clusters": 238234, 00:29:56.058 "free_clusters": 238234, 00:29:56.058 "block_size": 512, 00:29:56.058 "cluster_size": 4194304 00:29:56.058 } 00:29:56.058 ]' 00:29:56.058 07:15:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="311e5770-2a13-4e38-bfe3-06a9cc9c82fd") .free_clusters' 00:29:56.317 07:15:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:29:56.317 07:15:17 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="311e5770-2a13-4e38-bfe3-06a9cc9c82fd") .cluster_size' 00:29:56.317 07:15:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:29:56.317 07:15:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:29:56.317 07:15:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 00:29:56.317 952936 00:29:56.317 07:15:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:29:56.317 07:15:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:29:56.317 07:15:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 311e5770-2a13-4e38-bfe3-06a9cc9c82fd lbd_0 20480 00:29:56.884 07:15:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=9c43630d-b8f7-4a77-a65b-6265aeb9bc7d 00:29:56.884 07:15:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 9c43630d-b8f7-4a77-a65b-6265aeb9bc7d lvs_n_0 00:29:57.451 07:15:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=b7e4ba16-f373-4240-b82e-ec98bd727f1b 00:29:57.451 07:15:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb b7e4ba16-f373-4240-b82e-ec98bd727f1b 00:29:57.451 07:15:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=b7e4ba16-f373-4240-b82e-ec98bd727f1b 00:29:57.452 07:15:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:29:57.452 07:15:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:29:57.452 07:15:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:29:57.452 07:15:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:57.710 07:15:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:29:57.710 { 00:29:57.710 "uuid": "311e5770-2a13-4e38-bfe3-06a9cc9c82fd", 00:29:57.710 "name": "lvs_0", 00:29:57.710 "base_bdev": "Nvme0n1", 00:29:57.710 "total_data_clusters": 238234, 00:29:57.710 "free_clusters": 233114, 00:29:57.710 "block_size": 512, 00:29:57.710 "cluster_size": 4194304 00:29:57.710 }, 00:29:57.710 { 00:29:57.710 "uuid": "b7e4ba16-f373-4240-b82e-ec98bd727f1b", 00:29:57.710 "name": "lvs_n_0", 00:29:57.710 "base_bdev": "9c43630d-b8f7-4a77-a65b-6265aeb9bc7d", 00:29:57.710 "total_data_clusters": 5114, 00:29:57.710 "free_clusters": 5114, 00:29:57.710 "block_size": 512, 00:29:57.710 "cluster_size": 4194304 00:29:57.710 } 00:29:57.710 ]' 00:29:57.710 07:15:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="b7e4ba16-f373-4240-b82e-ec98bd727f1b") .free_clusters' 00:29:57.968 07:15:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:29:57.968 07:15:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="b7e4ba16-f373-4240-b82e-ec98bd727f1b") .cluster_size' 00:29:57.968 07:15:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:29:57.968 07:15:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:29:57.968 07:15:18 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1378 -- # echo 20456 00:29:57.968 20456 00:29:57.968 07:15:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:57.968 07:15:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b7e4ba16-f373-4240-b82e-ec98bd727f1b lbd_nest_0 20456 00:29:58.226 07:15:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=e4e62c9a-337f-4a3a-b37e-11060db8b87c 00:29:58.226 07:15:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:58.484 07:15:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:58.484 07:15:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 e4e62c9a-337f-4a3a-b37e-11060db8b87c 00:29:58.742 07:15:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:59.001 07:15:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:59.001 07:15:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:59.001 07:15:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:59.001 07:15:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:59.001 07:15:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:11.208 Initializing NVMe Controllers 00:30:11.208 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:11.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:11.208 Initialization complete. Launching workers. 00:30:11.208 ======================================================== 00:30:11.208 Latency(us) 00:30:11.208 Device Information : IOPS MiB/s Average min max 00:30:11.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.80 0.02 20992.17 175.94 45814.58 00:30:11.208 ======================================================== 00:30:11.208 Total : 47.80 0.02 20992.17 175.94 45814.58 00:30:11.208 00:30:11.208 07:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:11.208 07:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:21.192 Initializing NVMe Controllers 00:30:21.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:21.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:21.192 Initialization complete. Launching workers. 
00:30:21.192 ======================================================== 00:30:21.192 Latency(us) 00:30:21.192 Device Information : IOPS MiB/s Average min max 00:30:21.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 74.10 9.26 13504.86 6024.42 50861.84 00:30:21.192 ======================================================== 00:30:21.192 Total : 74.10 9.26 13504.86 6024.42 50861.84 00:30:21.192 00:30:21.192 07:15:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:21.192 07:15:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:21.193 07:15:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:31.179 Initializing NVMe Controllers 00:30:31.179 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:31.179 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:31.179 Initialization complete. Launching workers. 00:30:31.179 ======================================================== 00:30:31.179 Latency(us) 00:30:31.179 Device Information : IOPS MiB/s Average min max 00:30:31.179 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7391.99 3.61 4329.57 300.01 12181.47 00:30:31.179 ======================================================== 00:30:31.179 Total : 7391.99 3.61 4329.57 300.01 12181.47 00:30:31.179 00:30:31.179 07:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:31.179 07:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:41.160 Initializing NVMe Controllers 00:30:41.160 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:41.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:41.160 Initialization complete. Launching workers. 00:30:41.160 ======================================================== 00:30:41.160 Latency(us) 00:30:41.160 Device Information : IOPS MiB/s Average min max 00:30:41.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3971.01 496.38 8058.58 762.46 18136.89 00:30:41.160 ======================================================== 00:30:41.160 Total : 3971.01 496.38 8058.58 762.46 18136.89 00:30:41.160 00:30:41.160 07:16:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:41.160 07:16:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:41.160 07:16:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:51.149 Initializing NVMe Controllers 00:30:51.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:51.149 Controller IO queue size 128, less than required. 00:30:51.149 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:51.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:51.149 Initialization complete. Launching workers. 00:30:51.149 ======================================================== 00:30:51.149 Latency(us) 00:30:51.149 Device Information : IOPS MiB/s Average min max 00:30:51.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11808.04 5.77 10844.41 1777.59 26635.94 00:30:51.149 ======================================================== 00:30:51.149 Total : 11808.04 5.77 10844.41 1777.59 26635.94 00:30:51.149 00:30:51.149 07:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:51.149 07:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:01.132 Initializing NVMe Controllers 00:31:01.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:01.132 Controller IO queue size 128, less than required. 00:31:01.132 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:01.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:01.132 Initialization complete. Launching workers. 00:31:01.132 ======================================================== 00:31:01.132 Latency(us) 00:31:01.132 Device Information : IOPS MiB/s Average min max 00:31:01.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1182.89 147.86 108693.02 16397.74 228878.41 00:31:01.132 ======================================================== 00:31:01.132 Total : 1182.89 147.86 108693.02 16397.74 228878.41 00:31:01.132 00:31:01.132 07:16:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:01.390 07:16:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e4e62c9a-337f-4a3a-b37e-11060db8b87c 00:31:02.324 07:16:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:02.582 07:16:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9c43630d-b8f7-4a77-a65b-6265aeb9bc7d 00:31:02.841 07:16:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:03.099 07:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:03.099 07:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:03.099 07:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:03.099 07:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:03.099 07:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:03.099 07:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:03.099 07:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:03.099 07:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:03.099 rmmod nvme_tcp 
00:31:03.099 rmmod nvme_fabrics 00:31:03.099 rmmod nvme_keyring 00:31:03.099 07:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:03.099 07:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:03.099 07:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:03.099 07:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 341779 ']' 00:31:03.099 07:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 341779 00:31:03.099 07:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 341779 ']' 00:31:03.099 07:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 341779 00:31:03.099 07:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:31:03.358 07:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:03.358 07:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 341779 00:31:03.358 07:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:03.358 07:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:03.358 07:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 341779' 00:31:03.358 killing process with pid 341779 00:31:03.358 07:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 341779 00:31:03.358 07:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 341779 00:31:04.735 07:16:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:04.735 07:16:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:04.735 07:16:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:04.735 07:16:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:04.735 07:16:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:31:04.735 07:16:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:31:04.735 07:16:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:04.993 07:16:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:04.993 07:16:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:04.993 07:16:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.993 07:16:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.993 07:16:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.903 07:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:06.903 00:31:06.903 real 1m31.922s 00:31:06.903 user 5m40.850s 00:31:06.903 sys 0m15.452s 00:31:06.903 07:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:06.903 07:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:06.903 ************************************ 00:31:06.903 END TEST nvmf_perf 00:31:06.903 ************************************ 00:31:06.903 07:16:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:06.903 07:16:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:06.903 07:16:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:06.903 07:16:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.903 ************************************ 00:31:06.903 START TEST nvmf_fio_host 00:31:06.903 ************************************ 00:31:06.903 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:06.903 * Looking for test storage... 00:31:06.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:06.903 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:06.903 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:31:06.903 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:07.162 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:07.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.162 --rc genhtml_branch_coverage=1 00:31:07.162 --rc genhtml_function_coverage=1 00:31:07.162 --rc genhtml_legend=1 00:31:07.162 --rc geninfo_all_blocks=1 00:31:07.162 --rc geninfo_unexecuted_blocks=1 00:31:07.162 00:31:07.163 ' 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:07.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.163 --rc genhtml_branch_coverage=1 00:31:07.163 --rc genhtml_function_coverage=1 00:31:07.163 --rc genhtml_legend=1 00:31:07.163 --rc geninfo_all_blocks=1 00:31:07.163 --rc geninfo_unexecuted_blocks=1 00:31:07.163 00:31:07.163 ' 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:07.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.163 --rc genhtml_branch_coverage=1 00:31:07.163 --rc genhtml_function_coverage=1 00:31:07.163 --rc genhtml_legend=1 00:31:07.163 --rc geninfo_all_blocks=1 00:31:07.163 --rc geninfo_unexecuted_blocks=1 00:31:07.163 00:31:07.163 ' 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:07.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.163 --rc genhtml_branch_coverage=1 00:31:07.163 --rc genhtml_function_coverage=1 00:31:07.163 --rc genhtml_legend=1 00:31:07.163 --rc geninfo_all_blocks=1 00:31:07.163 --rc geninfo_unexecuted_blocks=1 00:31:07.163 00:31:07.163 ' 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:07.163 07:16:27 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:07.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:07.163 
07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:07.163 07:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:09.698 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:09.698 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:09.699 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:09.699 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:09.699 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:09.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:09.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:31:09.699 00:31:09.699 --- 10.0.0.2 ping statistics --- 00:31:09.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:09.699 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:09.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:09.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:31:09.699 00:31:09.699 --- 10.0.0.1 ping statistics --- 00:31:09.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:09.699 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=354487 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 354487 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 354487 ']' 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:09.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:09.699 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.699 [2024-11-18 07:16:30.320597] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
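The networking that the pings above verify is built by nvmf/common.sh: the first NIC port is moved into a private network namespace and acts as the target (10.0.0.2), while the second port stays in the default namespace as the initiator (10.0.0.1), with an iptables rule admitting NVMe/TCP traffic on port 4420. A condensed sketch of that setup, using the interface names and addresses from the trace (command order is simplified; treat it as illustrative rather than the canonical script):

  # Target side lives in its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Addresses: initiator in the default namespace, target inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP (port 4420) in on the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Connectivity check in both directions before starting nvmf_tgt
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1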
00:31:09.699 [2024-11-18 07:16:30.320708] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:09.699 [2024-11-18 07:16:30.397519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:09.699 [2024-11-18 07:16:30.443089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:09.700 [2024-11-18 07:16:30.443157] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:09.700 [2024-11-18 07:16:30.443171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:09.700 [2024-11-18 07:16:30.443182] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:09.700 [2024-11-18 07:16:30.443207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:09.700 [2024-11-18 07:16:30.444760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:09.700 [2024-11-18 07:16:30.444809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:09.700 [2024-11-18 07:16:30.444853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:09.700 [2024-11-18 07:16:30.444856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.700 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:09.700 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:31:09.700 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:09.958 [2024-11-18 07:16:30.851127] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:09.958 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:09.958 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:09.958 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.958 07:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:10.526 Malloc1 00:31:10.526 07:16:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:10.526 07:16:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:11.093 07:16:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:11.093 [2024-11-18 07:16:32.023279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:11.093 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:11.354 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:11.354 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:11.354 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:11.354 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:11.354 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:11.354 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:11.354 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:11.354 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:11.354 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:11.612 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:11.612 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:11.612 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:11.612 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:11.612 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:11.612 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:11.612 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:11.612 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:11.612 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:11.612 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:11.612 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:11.612 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:11.612 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:11.612 07:16:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:11.612 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:11.612 fio-3.35 00:31:11.612 Starting 1 thread 00:31:14.143 00:31:14.143 test: (groupid=0, jobs=1): 
err= 0: pid=354850: Mon Nov 18 07:16:34 2024 00:31:14.143 read: IOPS=8920, BW=34.8MiB/s (36.5MB/s)(69.9MiB/2006msec) 00:31:14.143 slat (nsec): min=1818, max=123354, avg=2329.62, stdev=1532.07 00:31:14.143 clat (usec): min=2393, max=13084, avg=7835.42, stdev=649.61 00:31:14.143 lat (usec): min=2416, max=13087, avg=7837.75, stdev=649.52 00:31:14.143 clat percentiles (usec): 00:31:14.143 | 1.00th=[ 6390], 5.00th=[ 6783], 10.00th=[ 7046], 20.00th=[ 7308], 00:31:14.143 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 8029], 00:31:14.143 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8848], 00:31:14.143 | 99.00th=[ 9241], 99.50th=[ 9503], 99.90th=[11207], 99.95th=[12125], 00:31:14.143 | 99.99th=[13042] 00:31:14.143 bw ( KiB/s): min=34744, max=36200, per=99.92%, avg=35654.00, stdev=631.89, samples=4 00:31:14.143 iops : min= 8686, max= 9050, avg=8913.50, stdev=157.97, samples=4 00:31:14.143 write: IOPS=8934, BW=34.9MiB/s (36.6MB/s)(70.0MiB/2006msec); 0 zone resets 00:31:14.143 slat (nsec): min=1935, max=94305, avg=2451.19, stdev=1204.76 00:31:14.143 clat (usec): min=988, max=12725, avg=6452.35, stdev=547.38 00:31:14.143 lat (usec): min=994, max=12728, avg=6454.80, stdev=547.34 00:31:14.143 clat percentiles (usec): 00:31:14.143 | 1.00th=[ 5211], 5.00th=[ 5669], 10.00th=[ 5800], 20.00th=[ 6063], 00:31:14.143 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6587], 00:31:14.143 | 70.00th=[ 6718], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 7242], 00:31:14.143 | 99.00th=[ 7635], 99.50th=[ 7832], 99.90th=[10814], 99.95th=[11600], 00:31:14.143 | 99.99th=[11994] 00:31:14.143 bw ( KiB/s): min=35512, max=35944, per=99.96%, avg=35726.00, stdev=183.06, samples=4 00:31:14.143 iops : min= 8878, max= 8986, avg=8931.50, stdev=45.76, samples=4 00:31:14.143 lat (usec) : 1000=0.01% 00:31:14.143 lat (msec) : 2=0.03%, 4=0.11%, 10=99.72%, 20=0.14% 00:31:14.143 cpu : usr=64.66%, sys=33.70%, ctx=73, majf=0, minf=41 00:31:14.143 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:14.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:14.143 issued rwts: total=17894,17923,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:14.143 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:14.143 00:31:14.143 Run status group 0 (all jobs): 00:31:14.143 READ: bw=34.8MiB/s (36.5MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), io=69.9MiB (73.3MB), run=2006-2006msec 00:31:14.143 WRITE: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=70.0MiB (73.4MB), run=2006-2006msec 00:31:14.143 07:16:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:14.143 07:16:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:14.143 07:16:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:14.143 07:16:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:14.143 07:16:34 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:14.143 07:16:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:14.143 07:16:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:14.143 07:16:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:14.143 07:16:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:14.143 07:16:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:14.143 07:16:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:14.143 07:16:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:14.143 07:16:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:14.143 07:16:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:14.143 07:16:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:14.143 07:16:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:14.143 07:16:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:14.143 07:16:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:14.143 07:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:14.143 07:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:14.143 07:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:14.144 07:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:14.402 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:14.402 fio-3.35 00:31:14.402 Starting 1 thread 00:31:16.932 00:31:16.932 test: (groupid=0, jobs=1): err= 0: pid=355184: Mon Nov 18 07:16:37 2024 00:31:16.932 read: IOPS=8506, BW=133MiB/s (139MB/s)(267MiB/2006msec) 00:31:16.932 slat (usec): min=2, max=102, avg= 3.61, stdev= 1.63 00:31:16.932 clat (usec): min=2166, max=18660, avg=8658.77, stdev=2114.24 00:31:16.932 lat (usec): min=2169, max=18664, avg=8662.37, stdev=2114.27 00:31:16.932 clat percentiles (usec): 00:31:16.932 | 1.00th=[ 4555], 5.00th=[ 5538], 10.00th=[ 6194], 20.00th=[ 6980], 00:31:16.932 | 30.00th=[ 7439], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 8979], 00:31:16.932 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[11469], 95.00th=[12518], 00:31:16.932 | 99.00th=[15008], 99.50th=[15664], 99.90th=[16909], 99.95th=[18220], 00:31:16.932 | 99.99th=[18482] 00:31:16.932 bw ( KiB/s): min=62496, max=74880, per=51.05%, avg=69480.00, stdev=6032.57, samples=4 00:31:16.932 iops : min= 3906, max= 4680, avg=4342.50, stdev=377.04, samples=4 00:31:16.932 write: IOPS=4889, 
BW=76.4MiB/s (80.1MB/s)(142MiB/1859msec); 0 zone resets 00:31:16.932 slat (usec): min=30, max=164, avg=33.10, stdev= 4.93 00:31:16.932 clat (usec): min=5913, max=20730, avg=11315.65, stdev=1884.97 00:31:16.932 lat (usec): min=5945, max=20761, avg=11348.75, stdev=1884.95 00:31:16.932 clat percentiles (usec): 00:31:16.932 | 1.00th=[ 7373], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[ 9765], 00:31:16.932 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11207], 60.00th=[11731], 00:31:16.932 | 70.00th=[12256], 80.00th=[12911], 90.00th=[13829], 95.00th=[14615], 00:31:16.932 | 99.00th=[15664], 99.50th=[16188], 99.90th=[20055], 99.95th=[20579], 00:31:16.932 | 99.99th=[20841] 00:31:16.932 bw ( KiB/s): min=65952, max=77536, per=92.14%, avg=72088.00, stdev=5738.89, samples=4 00:31:16.932 iops : min= 4122, max= 4846, avg=4505.50, stdev=358.68, samples=4 00:31:16.932 lat (msec) : 4=0.25%, 10=60.41%, 20=39.29%, 50=0.05% 00:31:16.932 cpu : usr=78.15%, sys=20.70%, ctx=39, majf=0, minf=61 00:31:16.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:31:16.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:16.932 issued rwts: total=17064,9090,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.932 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:16.932 00:31:16.932 Run status group 0 (all jobs): 00:31:16.932 READ: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=267MiB (280MB), run=2006-2006msec 00:31:16.932 WRITE: bw=76.4MiB/s (80.1MB/s), 76.4MiB/s-76.4MiB/s (80.1MB/s-80.1MB/s), io=142MiB (149MB), run=1859-1859msec 00:31:16.932 07:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:16.932 07:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:16.932 07:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:16.932 07:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:16.932 07:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:16.932 07:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:31:16.932 07:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:16.932 07:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:16.932 07:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:16.932 07:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:16.932 07:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:31:16.932 07:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:31:20.224 Nvme0n1 00:31:20.224 07:16:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:23.512 07:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@53 -- # ls_guid=8371f68c-d23b-45e1-ae8c-da26c21da931 00:31:23.512 07:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 8371f68c-d23b-45e1-ae8c-da26c21da931 00:31:23.512 07:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=8371f68c-d23b-45e1-ae8c-da26c21da931 00:31:23.512 07:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:23.512 07:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:23.512 07:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:23.512 07:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:23.512 07:16:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:23.512 { 00:31:23.512 "uuid": "8371f68c-d23b-45e1-ae8c-da26c21da931", 00:31:23.512 "name": "lvs_0", 00:31:23.512 "base_bdev": "Nvme0n1", 00:31:23.512 "total_data_clusters": 930, 00:31:23.512 "free_clusters": 930, 00:31:23.512 "block_size": 512, 00:31:23.512 "cluster_size": 1073741824 00:31:23.512 } 00:31:23.512 ]' 00:31:23.512 07:16:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="8371f68c-d23b-45e1-ae8c-da26c21da931") .free_clusters' 00:31:23.512 07:16:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:31:23.512 07:16:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="8371f68c-d23b-45e1-ae8c-da26c21da931") .cluster_size' 00:31:23.512 07:16:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:31:23.512 07:16:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:31:23.512 07:16:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:31:23.512 952320 00:31:23.512 07:16:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:23.770 44b6f183-8890-43b3-a50d-5bf1418077be 00:31:23.770 07:16:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:24.027 07:16:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:24.285 07:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:24.854 07:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:24.854 07:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:24.854 07:16:45 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:24.854 07:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:24.854 07:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:24.854 07:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:24.854 07:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:24.854 07:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:24.854 07:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.854 07:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:24.854 07:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:24.854 07:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:24.854 07:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:24.854 07:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:24.854 07:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.854 07:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:24.854 07:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:24.854 07:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:24.854 07:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:24.854 07:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:24.854 07:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:24.854 07:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:24.854 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:24.854 fio-3.35 00:31:24.854 Starting 1 thread 00:31:27.387 00:31:27.387 test: (groupid=0, jobs=1): err= 0: pid=356582: Mon Nov 18 07:16:48 2024 00:31:27.387 read: IOPS=6056, BW=23.7MiB/s (24.8MB/s)(47.5MiB/2007msec) 00:31:27.387 slat (nsec): min=1968, max=188199, avg=2531.90, stdev=2446.73 00:31:27.387 clat (usec): min=1019, max=171147, avg=11546.86, stdev=11585.05 00:31:27.387 lat (usec): min=1023, max=171198, avg=11549.39, stdev=11585.47 00:31:27.387 clat percentiles (msec): 00:31:27.387 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 10], 00:31:27.387 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:31:27.387 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:31:27.387 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 
00:31:27.387 | 99.99th=[ 171] 00:31:27.387 bw ( KiB/s): min=17048, max=26832, per=99.76%, avg=24166.00, stdev=4749.86, samples=4 00:31:27.387 iops : min= 4262, max= 6708, avg=6041.50, stdev=1187.47, samples=4 00:31:27.387 write: IOPS=6036, BW=23.6MiB/s (24.7MB/s)(47.3MiB/2007msec); 0 zone resets 00:31:27.387 slat (usec): min=2, max=148, avg= 2.63, stdev= 1.78 00:31:27.387 clat (usec): min=278, max=168835, avg=9515.43, stdev=10867.60 00:31:27.387 lat (usec): min=283, max=168843, avg=9518.06, stdev=10867.99 00:31:27.387 clat percentiles (msec): 00:31:27.387 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:31:27.387 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:31:27.387 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:31:27.387 | 99.00th=[ 11], 99.50th=[ 17], 99.90th=[ 169], 99.95th=[ 169], 00:31:27.387 | 99.99th=[ 169] 00:31:27.387 bw ( KiB/s): min=18088, max=26184, per=99.89%, avg=24122.00, stdev=4022.83, samples=4 00:31:27.387 iops : min= 4522, max= 6546, avg=6030.50, stdev=1005.71, samples=4 00:31:27.387 lat (usec) : 500=0.01%, 750=0.01% 00:31:27.387 lat (msec) : 2=0.03%, 4=0.13%, 10=58.60%, 20=40.69%, 250=0.53% 00:31:27.387 cpu : usr=62.16%, sys=36.54%, ctx=92, majf=0, minf=41 00:31:27.387 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:27.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:27.387 issued rwts: total=12155,12116,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.387 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:27.387 00:31:27.387 Run status group 0 (all jobs): 00:31:27.387 READ: bw=23.7MiB/s (24.8MB/s), 23.7MiB/s-23.7MiB/s (24.8MB/s-24.8MB/s), io=47.5MiB (49.8MB), run=2007-2007msec 00:31:27.387 WRITE: bw=23.6MiB/s (24.7MB/s), 23.6MiB/s-23.6MiB/s (24.7MB/s-24.7MB/s), io=47.3MiB (49.6MB), run=2007-2007msec 00:31:27.387 07:16:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:27.647 07:16:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:29.077 07:16:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=24f15f03-d1db-4753-aed2-e4cc6c0cd333 00:31:29.077 07:16:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 24f15f03-d1db-4753-aed2-e4cc6c0cd333 00:31:29.077 07:16:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=24f15f03-d1db-4753-aed2-e4cc6c0cd333 00:31:29.077 07:16:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:29.077 07:16:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:29.077 07:16:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:29.077 07:16:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:29.077 07:16:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:29.077 { 00:31:29.077 "uuid": "8371f68c-d23b-45e1-ae8c-da26c21da931", 00:31:29.077 "name": "lvs_0", 00:31:29.077 "base_bdev": "Nvme0n1", 00:31:29.077 "total_data_clusters": 930, 
00:31:29.077 "free_clusters": 0, 00:31:29.077 "block_size": 512, 00:31:29.077 "cluster_size": 1073741824 00:31:29.077 }, 00:31:29.077 { 00:31:29.077 "uuid": "24f15f03-d1db-4753-aed2-e4cc6c0cd333", 00:31:29.077 "name": "lvs_n_0", 00:31:29.077 "base_bdev": "44b6f183-8890-43b3-a50d-5bf1418077be", 00:31:29.077 "total_data_clusters": 237847, 00:31:29.077 "free_clusters": 237847, 00:31:29.077 "block_size": 512, 00:31:29.077 "cluster_size": 4194304 00:31:29.077 } 00:31:29.077 ]' 00:31:29.077 07:16:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="24f15f03-d1db-4753-aed2-e4cc6c0cd333") .free_clusters' 00:31:29.077 07:16:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:31:29.077 07:16:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="24f15f03-d1db-4753-aed2-e4cc6c0cd333") .cluster_size' 00:31:29.077 07:16:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:29.077 07:16:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:31:29.077 07:16:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:31:29.077 951388 00:31:29.077 07:16:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:29.723 34a107a4-0f03-46af-8107-3c187abfd0f0 00:31:29.723 07:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:30.033 07:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:30.320 07:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:30.603 07:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:30.604 07:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:30.604 07:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:30.604 07:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:30.604 07:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:30.604 07:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:30.604 07:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:30.604 07:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:30.604 07:16:51 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.604 07:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:30.604 07:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:30.604 07:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:30.604 07:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:30.604 07:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:30.604 07:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.604 07:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:30.604 07:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:30.604 07:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:30.604 07:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:30.604 07:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:30.604 07:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:30.604 07:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:30.862 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:30.862 fio-3.35 00:31:30.862 Starting 1 thread 00:31:33.395 00:31:33.395 test: (groupid=0, jobs=1): err= 0: pid=357335: Mon Nov 18 07:16:54 2024 00:31:33.395 read: IOPS=5744, BW=22.4MiB/s (23.5MB/s)(45.1MiB/2010msec) 00:31:33.395 slat (nsec): min=1826, max=142045, avg=2396.22, stdev=1981.39 00:31:33.395 clat (usec): min=4490, max=20397, avg=12153.05, stdev=1144.83 00:31:33.395 lat (usec): min=4504, max=20399, avg=12155.45, stdev=1144.73 00:31:33.395 clat percentiles (usec): 00:31:33.395 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10814], 20.00th=[11207], 00:31:33.395 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:31:33.395 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13566], 95.00th=[13960], 00:31:33.395 | 99.00th=[14615], 99.50th=[15008], 99.90th=[17433], 99.95th=[19006], 00:31:33.395 | 99.99th=[20317] 00:31:33.395 bw ( KiB/s): min=21832, max=23496, per=99.93%, avg=22962.00, stdev=768.20, samples=4 00:31:33.395 iops : min= 5458, max= 5874, avg=5740.50, stdev=192.05, samples=4 00:31:33.395 write: IOPS=5734, BW=22.4MiB/s (23.5MB/s)(45.0MiB/2010msec); 0 zone resets 00:31:33.395 slat (nsec): min=1978, max=113823, avg=2579.72, stdev=1619.64 00:31:33.395 clat (usec): min=2186, max=19050, avg=10032.48, stdev=952.12 00:31:33.395 lat (usec): min=2193, max=19053, avg=10035.06, stdev=952.08 00:31:33.395 clat percentiles (usec): 00:31:33.395 | 1.00th=[ 7898], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9372], 00:31:33.395 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 
00:31:33.395 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11076], 95.00th=[11338], 00:31:33.395 | 99.00th=[11994], 99.50th=[12518], 99.90th=[17171], 99.95th=[17433], 00:31:33.395 | 99.99th=[19006] 00:31:33.395 bw ( KiB/s): min=22784, max=23104, per=99.97%, avg=22930.00, stdev=136.45, samples=4 00:31:33.396 iops : min= 5696, max= 5776, avg=5732.50, stdev=34.11, samples=4 00:31:33.396 lat (msec) : 4=0.05%, 10=25.23%, 20=74.71%, 50=0.01% 00:31:33.396 cpu : usr=59.93%, sys=38.83%, ctx=90, majf=0, minf=41 00:31:33.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:33.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:33.396 issued rwts: total=11547,11526,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.396 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:33.396 00:31:33.396 Run status group 0 (all jobs): 00:31:33.396 READ: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=45.1MiB (47.3MB), run=2010-2010msec 00:31:33.396 WRITE: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=45.0MiB (47.2MB), run=2010-2010msec 00:31:33.396 07:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:33.654 07:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:33.654 07:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:37.844 07:16:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:37.844 07:16:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:41.145 07:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:41.145 07:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:43.049 rmmod nvme_tcp 00:31:43.049 rmmod nvme_fabrics 00:31:43.049 rmmod nvme_keyring 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # 
set -e 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 354487 ']' 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 354487 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 354487 ']' 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 354487 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354487 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354487' 00:31:43.049 killing process with pid 354487 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 354487 00:31:43.049 07:17:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 354487 00:31:43.049 07:17:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:43.049 07:17:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:43.049 07:17:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:43.049 07:17:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:31:43.049 07:17:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:31:43.049 07:17:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:43.049 07:17:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:31:43.049 07:17:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:43.049 07:17:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:43.049 07:17:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:43.049 07:17:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:43.049 07:17:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.591 07:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:45.591 00:31:45.591 real 0m38.252s 00:31:45.591 user 2m27.229s 00:31:45.592 sys 0m7.051s 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.592 ************************************ 00:31:45.592 END TEST nvmf_fio_host 00:31:45.592 ************************************ 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.592 ************************************ 00:31:45.592 START TEST nvmf_failover 00:31:45.592 ************************************ 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:45.592 * Looking for test storage... 00:31:45.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:45.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.592 --rc genhtml_branch_coverage=1 00:31:45.592 --rc genhtml_function_coverage=1 00:31:45.592 --rc genhtml_legend=1 00:31:45.592 --rc geninfo_all_blocks=1 00:31:45.592 --rc geninfo_unexecuted_blocks=1 00:31:45.592 00:31:45.592 ' 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:45.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.592 --rc genhtml_branch_coverage=1 00:31:45.592 --rc genhtml_function_coverage=1 00:31:45.592 --rc genhtml_legend=1 00:31:45.592 --rc geninfo_all_blocks=1 00:31:45.592 --rc geninfo_unexecuted_blocks=1 00:31:45.592 00:31:45.592 ' 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:45.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.592 --rc genhtml_branch_coverage=1 00:31:45.592 --rc genhtml_function_coverage=1 00:31:45.592 --rc genhtml_legend=1 00:31:45.592 --rc geninfo_all_blocks=1 00:31:45.592 --rc geninfo_unexecuted_blocks=1 00:31:45.592 00:31:45.592 ' 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:45.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.592 --rc genhtml_branch_coverage=1 00:31:45.592 --rc genhtml_function_coverage=1 00:31:45.592 --rc genhtml_legend=1 00:31:45.592 --rc geninfo_all_blocks=1 00:31:45.592 --rc geninfo_unexecuted_blocks=1 00:31:45.592 00:31:45.592 ' 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:45.592 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:45.593 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:45.593 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:45.593 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:45.593 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:45.593 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:45.593 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:45.593 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:45.593 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:45.593 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:31:45.593 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:45.593 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:45.593 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:45.593 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:45.593 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:45.593 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:45.593 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:45.593 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.593 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:45.593 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.593 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:45.593 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:45.593 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:31:45.593 07:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:47.496 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:47.496 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:47.496 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.496 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:47.496 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:47.497 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:47.497 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:47.497 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:31:47.497 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:47.497 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:47.497 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:47.497 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:47.497 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:47.497 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:47.497 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:47.497 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:47.497 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:47.497 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:47.497 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:47.497 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:31:47.497 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:47.497 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:47.497 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:47.497 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:47.497 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:47.497 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:47.755 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:47.755 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:47.755 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:47.755 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:47.755 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:47.755 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:47.755 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:47.755 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:47.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:47.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:31:47.755 00:31:47.755 --- 10.0.0.2 ping statistics --- 00:31:47.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:47.755 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:31:47.755 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:47.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:47.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:31:47.755 00:31:47.755 --- 10.0.0.1 ping statistics --- 00:31:47.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:47.755 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:31:47.755 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:47.756 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:31:47.756 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:47.756 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:47.756 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:47.756 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:47.756 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:47.756 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:47.756 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:47.756 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:47.756 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:47.756 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:47.756 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:47.756 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=360717 00:31:47.756 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:47.756 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 360717 00:31:47.756 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 360717 ']' 00:31:47.756 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:47.756 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:47.756 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:47.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:47.756 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:47.756 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:47.756 [2024-11-18 07:17:08.662056] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:31:47.756 [2024-11-18 07:17:08.662143] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:48.014 [2024-11-18 07:17:08.740093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:48.014 [2024-11-18 07:17:08.785978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:48.014 [2024-11-18 07:17:08.786032] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:48.014 [2024-11-18 07:17:08.786045] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:48.014 [2024-11-18 07:17:08.786056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:48.014 [2024-11-18 07:17:08.786065] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:48.014 [2024-11-18 07:17:08.787405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:48.014 [2024-11-18 07:17:08.787468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:48.014 [2024-11-18 07:17:08.787471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.014 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:48.014 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:31:48.014 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:48.014 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:48.014 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:48.014 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:48.014 07:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:48.272 [2024-11-18 07:17:09.166646] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:48.272 07:17:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:48.530 Malloc0 00:31:48.530 07:17:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:48.790 07:17:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:49.358 07:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:49.358 [2024-11-18 07:17:10.289833] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:49.358 07:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:49.616 [2024-11-18 07:17:10.578597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:49.875 07:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:50.133 [2024-11-18 07:17:10.859534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:31:50.133 07:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=361008 00:31:50.133 07:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:50.133 07:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:50.133 07:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 361008 /var/tmp/bdevperf.sock 00:31:50.133 07:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 361008 ']' 00:31:50.133 07:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:50.133 07:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:50.133 07:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:50.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:50.133 07:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:50.133 07:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:50.391 07:17:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:50.391 07:17:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:31:50.391 07:17:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:50.649 NVMe0n1 00:31:50.649 07:17:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:51.218 00:31:51.218 07:17:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=361144 00:31:51.218 07:17:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:51.218 07:17:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:31:52.155 07:17:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:52.412 [2024-11-18 07:17:13.287742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17aa060 is same with the state(6) to be set 00:31:52.412 [2024-11-18 07:17:13.287844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17aa060 is same with the state(6) to be set 00:31:52.412 [2024-11-18 07:17:13.287869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17aa060 is same with the state(6) to be set 00:31:52.412 [2024-11-18 
07:17:13.287881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17aa060 is same with the state(6) to be set
[... same recv-state message for tqpair=0x17aa060 repeated with advancing timestamps through 2024-11-18 07:17:13.288138 ...]
00:31:52.413 [2024-11-18 07:17:13.288150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17aa060 is same
with the state(6) to be set 00:31:52.413 07:17:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:55.701 07:17:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:55.959 00:31:55.959 07:17:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:56.218 [2024-11-18 07:17:17.012612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 [2024-11-18 07:17:17.012677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 [2024-11-18 07:17:17.012701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 [2024-11-18 07:17:17.012714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 [2024-11-18 07:17:17.012726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 [2024-11-18 07:17:17.012738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 [2024-11-18 07:17:17.012750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 [2024-11-18 07:17:17.012762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 [2024-11-18 07:17:17.012773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 [2024-11-18 07:17:17.012785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 [2024-11-18 07:17:17.012798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 [2024-11-18 07:17:17.012810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 [2024-11-18 07:17:17.012822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 [2024-11-18 07:17:17.012835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 [2024-11-18 07:17:17.012847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 [2024-11-18 07:17:17.012859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 [2024-11-18 07:17:17.012871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 [2024-11-18 07:17:17.012883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x17ab5a0 is same with the state(6) to be set
[... same recv-state message for tqpair=0x17ab5a0 repeated with advancing timestamps through 2024-11-18 07:17:17.013146 ...]
00:31:56.218 [2024-11-18 07:17:17.013158]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 [2024-11-18 07:17:17.013170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 [2024-11-18 07:17:17.013182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 [2024-11-18 07:17:17.013194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 [2024-11-18 07:17:17.013205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 [2024-11-18 07:17:17.013217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 [2024-11-18 07:17:17.013229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ab5a0 is same with the state(6) to be set 00:31:56.218 07:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:59.505 07:17:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:59.505 [2024-11-18 07:17:20.341256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:59.505 07:17:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:00.441 07:17:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:00.700 [2024-11-18 07:17:21.617325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ac790 is same with the state(6) to be set 00:32:00.700 [2024-11-18 07:17:21.617415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ac790 is same with the state(6) to be set 00:32:00.700 [2024-11-18 07:17:21.617439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ac790 is same with the state(6) to be set 00:32:00.700 [2024-11-18 07:17:21.617452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ac790 is same with the state(6) to be set 00:32:00.700 [2024-11-18 07:17:21.617464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ac790 is same with the state(6) to be set 00:32:00.700 [2024-11-18 07:17:21.617477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ac790 is same with the state(6) to be set 00:32:00.700 [2024-11-18 07:17:21.617497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ac790 is same with the state(6) to be set 00:32:00.700 [2024-11-18 07:17:21.617512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ac790 is same with the state(6) to be set 00:32:00.700 [2024-11-18 07:17:21.617523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ac790 is same with the state(6) to be set 00:32:00.700 [2024-11-18 07:17:21.617535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ac790 is same with the state(6) to be set 00:32:00.700 [2024-11-18 
07:17:21.617548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ac790 is same with the state(6) to be set
[... same recv-state message for tqpair=0x17ac790 repeated with advancing timestamps through 2024-11-18 07:17:21.617817 ...]
00:32:00.700 [2024-11-18 07:17:21.617828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ac790 is same
with the state(6) to be set 00:32:00.700 [2024-11-18 07:17:21.617839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ac790 is same with the state(6) to be set 00:32:00.700 [2024-11-18 07:17:21.617850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ac790 is same with the state(6) to be set 00:32:00.700 [2024-11-18 07:17:21.617862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ac790 is same with the state(6) to be set 00:32:00.700 [2024-11-18 07:17:21.617889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ac790 is same with the state(6) to be set 00:32:00.700 [2024-11-18 07:17:21.617900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ac790 is same with the state(6) to be set 00:32:00.700 [2024-11-18 07:17:21.617911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ac790 is same with the state(6) to be set 00:32:00.700 07:17:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 361144 00:32:07.275 { 00:32:07.275 "results": [ 00:32:07.275 { 00:32:07.275 "job": "NVMe0n1", 00:32:07.275 "core_mask": "0x1", 00:32:07.275 "workload": "verify", 00:32:07.275 "status": "finished", 00:32:07.275 "verify_range": { 00:32:07.275 "start": 0, 00:32:07.275 "length": 16384 00:32:07.275 }, 00:32:07.276 "queue_depth": 128, 00:32:07.276 "io_size": 4096, 00:32:07.276 "runtime": 15.005342, 00:32:07.276 "iops": 8497.173873144644, 00:32:07.276 "mibps": 33.192085441971265, 00:32:07.276 "io_failed": 10509, 00:32:07.276 "io_timeout": 0, 00:32:07.276 "avg_latency_us": 13889.757650617606, 00:32:07.276 "min_latency_us": 515.7925925925925, 00:32:07.276 "max_latency_us": 23787.140740740742 00:32:07.276 } 00:32:07.276 ], 00:32:07.276 "core_count": 1 00:32:07.276 } 00:32:07.276 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 361008 00:32:07.276 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 361008 ']' 00:32:07.276 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 361008 00:32:07.276 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:07.276 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:07.276 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 361008 00:32:07.276 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:07.276 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:07.276 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 361008' 00:32:07.276 killing process with pid 361008 00:32:07.276 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 361008 00:32:07.276 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 361008 00:32:07.276 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:07.276 [2024-11-18 07:17:10.922564] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
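The sequence above is the core of the failover exercise: bdevperf is attached to nqn.2016-06.io.spdk:cnode1 via bdev_nvme_attach_controller with -x failover, alternate paths on ports 4421 and 4422 are registered under the same NVMe0 name, and listeners are removed and re-added while the verify workload runs; the dump that follows is the bdevperf log (try.txt) printed at teardown. The lines below are a minimal sketch of that flow using only the rpc.py and bdevperf invocations visible in this log; it is not the autotest's failover.sh itself, and the variable names and fixed sleeps (in place of the harness's waitforlisten helper) are simplifications assumed for brevity.

#!/usr/bin/env bash
# Minimal sketch of the failover flow above; paths, ports and RPC calls are taken
# verbatim from this log, while the fixed sleeps simplify the harness's own waiting logic.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# Start bdevperf waiting for RPC (-z) on its own socket, as host/failover.sh@30 does.
"$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 15 -f &
bdevperf_pid=$!
sleep 1   # the harness uses waitforlisten on $SOCK instead of a fixed sleep

# Primary path (4420) plus an alternate (4421) registered under the same bdev name
# with -x failover, so bdev_nvme treats 4421 as a failover trid, not a new controller.
"$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN" -x failover
"$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN" -x failover

# Run the verify workload, then pull the active listener out from under it, step by step.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests &
run_test_pid=$!
sleep 1
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420   # I/O fails over to 4421
sleep 3
"$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN" -x failover
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421   # fails over to 4422
sleep 3
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420      # restore 4420
sleep 1
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422   # back to 4420
wait "$run_test_pid"                                                      # host/failover.sh@59
kill "$bdevperf_pid" 2>/dev/null; wait "$bdevperf_pid" 2>/dev/null

Attaching each additional trid with the same -b NVMe0 and -x failover is what lets the bdev_nvme layer switch paths instead of creating separate controllers; the bdev_nvme.c:2052 "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" notice further down in the bdevperf log is that first switch happening after the 4420 listener is removed, and the summary above (io_failed 10509, ~8497 IOPS over 15 s) shows the workload continuing across the path changes.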
00:32:07.276 [2024-11-18 07:17:10.922671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid361008 ] 00:32:07.276 [2024-11-18 07:17:10.990792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.276 [2024-11-18 07:17:11.037030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.276 Running I/O for 15 seconds... 00:32:07.276 8098.00 IOPS, 31.63 MiB/s [2024-11-18T06:17:28.254Z] [2024-11-18 07:17:13.288658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.276 [2024-11-18 07:17:13.288699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.288726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 [2024-11-18 07:17:13.288743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.288760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 [2024-11-18 07:17:13.288774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.288789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 [2024-11-18 07:17:13.288803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.288830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 [2024-11-18 07:17:13.288859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.288874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 [2024-11-18 07:17:13.288888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.288902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 [2024-11-18 07:17:13.288930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.288946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 [2024-11-18 07:17:13.288959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.288974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75384 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 [2024-11-18 07:17:13.288987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.289002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 [2024-11-18 07:17:13.289017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.289031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 [2024-11-18 07:17:13.289046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.289069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 [2024-11-18 07:17:13.289085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.289100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 [2024-11-18 07:17:13.289113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.289127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 [2024-11-18 07:17:13.289140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.289155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 [2024-11-18 07:17:13.289168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.289182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 [2024-11-18 07:17:13.289195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.289209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 [2024-11-18 07:17:13.289223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.289237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 [2024-11-18 07:17:13.289251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.289265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 
[2024-11-18 07:17:13.289279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.289294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 [2024-11-18 07:17:13.289307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.289322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 [2024-11-18 07:17:13.289335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.289350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 [2024-11-18 07:17:13.289364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.289378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 [2024-11-18 07:17:13.289391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.289405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 [2024-11-18 07:17:13.289422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.289438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.276 [2024-11-18 07:17:13.289451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.276 [2024-11-18 07:17:13.289466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.289503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.289521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.277 [2024-11-18 07:17:13.289535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.289550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.289564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.289578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.289591] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.289605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.289619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.289634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.289648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.289663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.289677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.289691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.289704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.289719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.289732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.289746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.289759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.289789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.289803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.289820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.289834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.289849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.289862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.289876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.289889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.289903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.289916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.289930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.289944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.289958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.289970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.289984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.290004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.290019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.290032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.290046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.290059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.290074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.290087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.290101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.290114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.290128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.290141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.290156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.290169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.290187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.290200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.290214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.290228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.290242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.290255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.290269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.290283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.290298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.290311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.290326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.290339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.290353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.290366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.290380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.290393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.290407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.290420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.277 [2024-11-18 07:17:13.290433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.277 [2024-11-18 07:17:13.290451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.290466] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.290506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.290522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.290536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.290552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.290570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.290586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.290599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.290614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.278 [2024-11-18 07:17:13.290628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.290643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.290656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.290671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.290685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.290700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.290713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.290727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.290741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.290756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.290784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.290799] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.290813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.290827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.290840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.290854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.290868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.290882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.290895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.290909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.290922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.290940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.290955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.290969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.290983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.290997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.291010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.291024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.291038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.291052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.291065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.291080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75936 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.291093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.291107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.291120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.291134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.291147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.291162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.291175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.291190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.291203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.291217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.291231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.291244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.291257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.291272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.291288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.291303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.291316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.291330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.291343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.291357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 
[2024-11-18 07:17:13.291370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.291384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.291396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.291410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.291424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.291438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.291451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.291479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.291502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.278 [2024-11-18 07:17:13.291519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.278 [2024-11-18 07:17:13.291533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.291548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.279 [2024-11-18 07:17:13.291561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.291575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.279 [2024-11-18 07:17:13.291589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.291603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.279 [2024-11-18 07:17:13.291616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.291630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.291643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.291658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.291675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.291691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.291705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.291720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.291733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.291747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.291761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.291775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.291789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.291803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.291821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.291837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.291851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.291865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.291879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.291894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.291907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.291922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.291935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.291950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.291963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.291978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.291991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.292006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.292019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.292037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.292052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.292066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.292080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.292095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.292109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.292123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.292137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.292152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.292166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.292180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.292193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.292208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.292222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.292236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.292250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.292265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.292283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.292299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.292313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.292327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.292341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.292355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.292369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.292384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.292400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.292416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.292430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.292445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.279 [2024-11-18 07:17:13.292459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.292519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.279 [2024-11-18 07:17:13.292537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.279 [2024-11-18 07:17:13.292550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75320 len:8 PRP1 0x0 PRP2 0x0 00:32:07.279 [2024-11-18 07:17:13.292564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.292636] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:07.279 [2024-11-18 07:17:13.292675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.279 [2024-11-18 07:17:13.292694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.279 [2024-11-18 07:17:13.292710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.279 [2024-11-18 07:17:13.292724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:13.292738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.280 [2024-11-18 07:17:13.292756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:13.292771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.280 [2024-11-18 07:17:13.292784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:13.292807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:32:07.280 [2024-11-18 07:17:13.296073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:07.280 [2024-11-18 07:17:13.296113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d53b0 (9): Bad file descriptor 00:32:07.280 [2024-11-18 07:17:13.406960] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:32:07.280 7847.50 IOPS, 30.65 MiB/s [2024-11-18T06:17:28.258Z] 8150.67 IOPS, 31.84 MiB/s [2024-11-18T06:17:28.258Z] 8286.75 IOPS, 32.37 MiB/s [2024-11-18T06:17:28.258Z] [2024-11-18 07:17:17.013941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.013982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:100128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:100144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:100168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:100184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:100192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:100200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:100208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:07.280 [2024-11-18 07:17:17.014771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:100232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.280 [2024-11-18 07:17:17.014834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:100240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.280 [2024-11-18 07:17:17.014849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.014864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.281 [2024-11-18 07:17:17.014903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.014918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.281 [2024-11-18 07:17:17.014932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.014946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:100264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.281 [2024-11-18 07:17:17.014960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.014975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.281 [2024-11-18 07:17:17.014988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.281 [2024-11-18 07:17:17.015016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.281 [2024-11-18 07:17:17.015045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.281 [2024-11-18 07:17:17.015072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 
07:17:17.015087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.281 [2024-11-18 07:17:17.015102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.281 [2024-11-18 07:17:17.015131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.281 [2024-11-18 07:17:17.015160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.281 [2024-11-18 07:17:17.015189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.281 [2024-11-18 07:17:17.015222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.281 [2024-11-18 07:17:17.015252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.281 [2024-11-18 07:17:17.015280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.281 [2024-11-18 07:17:17.015308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.281 [2024-11-18 07:17:17.015337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.281 [2024-11-18 07:17:17.015366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.281 [2024-11-18 07:17:17.015394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.281 [2024-11-18 07:17:17.015423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.281 [2024-11-18 07:17:17.015451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.281 [2024-11-18 07:17:17.015480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.281 [2024-11-18 07:17:17.015539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.281 [2024-11-18 07:17:17.015570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.281 [2024-11-18 07:17:17.015600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.281 [2024-11-18 07:17:17.015633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.281 [2024-11-18 07:17:17.015663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.281 [2024-11-18 07:17:17.015692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015707] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.281 [2024-11-18 07:17:17.015721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.281 [2024-11-18 07:17:17.015751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.281 [2024-11-18 07:17:17.015780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.281 [2024-11-18 07:17:17.015824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.281 [2024-11-18 07:17:17.015852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.281 [2024-11-18 07:17:17.015868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.015881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.015896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.015909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.015924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.015938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.015952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.015966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.015981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.015995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 
07:17:17.016632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.282 [2024-11-18 07:17:17.016911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.282 [2024-11-18 07:17:17.016926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.016939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.016955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.016968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.016983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.016996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.283 [2024-11-18 07:17:17.017823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.283 [2024-11-18 07:17:17.017838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.283 [2024-11-18 07:17:17.017852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:07.283 [2024-11-18 07:17:17.017867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.283 [2024-11-18 07:17:17.017896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.284 [2024-11-18 07:17:17.017938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.284 [2024-11-18 07:17:17.017955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.284 [2024-11-18 07:17:17.017971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100424 len:8 PRP1 0x0 PRP2 0x0 00:32:07.284 [2024-11-18 07:17:17.017993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.284 [2024-11-18 07:17:17.018061] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:07.284 [2024-11-18 07:17:17.018113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.284 [2024-11-18 07:17:17.018133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.284 [2024-11-18 07:17:17.018149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.284 [2024-11-18 07:17:17.018162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.284 [2024-11-18 07:17:17.018177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.284 [2024-11-18 07:17:17.018191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.284 [2024-11-18 07:17:17.018205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.284 [2024-11-18 07:17:17.018219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.284 [2024-11-18 07:17:17.018233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:32:07.284 [2024-11-18 07:17:17.018272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d53b0 (9): Bad file descriptor 00:32:07.284 [2024-11-18 07:17:17.021554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:32:07.284 [2024-11-18 07:17:17.091811] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
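Both cycles above follow the same shape: once the test drops the active path, every queued READ/WRITE on the I/O qpair is completed with ABORTED - SQ DELETION, bdev_nvme logs a failover to the next listener (10.0.0.2:4420 to 4421, then 4421 to 4422), the controller is marked failed, disconnected, and reset, and the interleaved IOPS samples resume once the reset completes. When triaging a run like this, a small script can reduce the noise to the numbers that matter. The sketch below is a hypothetical helper, not part of the SPDK test suite; it assumes only the two message patterns actually visible in the log above ("ABORTED - SQ DELETION" and "Start failover from ... to ...").

```python
#!/usr/bin/env python3
"""Hypothetical log triage helper: count aborted completions and list
the bdev_nvme failover transitions seen in a console log like the one above."""
import re
import sys
from collections import Counter

# Patterns copied from the log messages above; nothing else is assumed.
ABORT_RE = re.compile(r"ABORTED - SQ DELETION")
FAILOVER_RE = re.compile(
    r"Start failover from (?P<src>[0-9.]+:\d+) to (?P<dst>[0-9.]+:\d+)"
)


def summarize(lines):
    """Return (total aborted completions, Counter of (src, dst) failover pairs)."""
    aborts = 0
    transitions = Counter()
    for line in lines:
        aborts += len(ABORT_RE.findall(line))
        for m in FAILOVER_RE.finditer(line):
            transitions[(m.group("src"), m.group("dst"))] += 1
    return aborts, transitions


if __name__ == "__main__":
    # Usage (hypothetical): python3 summarize_failover.py < console.log
    aborts, transitions = summarize(sys.stdin)
    print(f"aborted completions: {aborts}")
    for (src, dst), count in sorted(transitions.items()):
        print(f"failover {src} -> {dst}: {count}")
```

Feeding the captured console log to this on stdin prints one abort total plus one line per observed path transition, which is typically enough to confirm that each failover in the test happened exactly once and that the abort storm is confined to the reset windows.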
00:32:07.284 8220.40 IOPS, 32.11 MiB/s [2024-11-18T06:17:28.262Z] 8291.83 IOPS, 32.39 MiB/s [2024-11-18T06:17:28.262Z] 8356.00 IOPS, 32.64 MiB/s [2024-11-18T06:17:28.262Z] 8398.38 IOPS, 32.81 MiB/s [2024-11-18T06:17:28.262Z] 8426.22 IOPS, 32.91 MiB/s [2024-11-18T06:17:28.262Z]
00:32:07.284 [2024-11-18 07:17:21.619088 - 07:17:21.622474] nvme_qpair.c: 243:nvme_io_qpair_print_command + 474:spdk_nvme_print_completion: *NOTICE*: every outstanding I/O on sqid:1 is printed and completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (READ nsid:1 lba:44728-44856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; WRITE nsid:1 lba:44864-45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000)
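(Aside, not part of the captured log: the status repeated above is NVMe generic status 00/08, "Command Aborted due to Submission Queue Deletion", which the SPDK host driver reports for every in-flight and queued I/O when a qpair's submission queue is torn down, e.g. on controller disconnect or reset during the workload. The C sketch below is illustrative only, not code from this test; example_io_complete() and the requeue step are hypothetical, while the struct fields and status constants come from SPDK's public spdk/nvme.h.)

/*
 * Minimal sketch, assuming an SPDK NVMe host application: recognize the
 * "ABORTED - SQ DELETION (00/08)" completion status seen in the log and
 * treat it as retry-after-reconnect rather than a hard I/O failure.
 */
#include <stdbool.h>
#include "spdk/nvme.h"

static bool
aborted_by_sq_deletion(const struct spdk_nvme_cpl *cpl)
{
	/* Status code type 0x0 (generic), status code 0x8 (SQ deletion). */
	return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	       cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
}

/* Hypothetical spdk_nvme_cmd_cb completion callback for submitted I/O. */
static void
example_io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;

	if (spdk_nvme_cpl_is_success(cpl)) {
		return;	/* Completed normally. */
	}
	if (aborted_by_sq_deletion(cpl)) {
		/* The qpair was deleted while this I/O was outstanding or
		 * queued; requeue cb_arg for resubmission once a new qpair
		 * is connected instead of failing the I/O. */
		return;
	}
	/* Any other error status: surface it to the caller. */
}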
00:32:07.287 [2024-11-18 07:17:21.622495 - 07:17:21.622589] nvme_qpair.c: the command/completion prints close out with WRITE nsid:1 lba:45616 (SGL DATA BLOCK), then 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: WRITE sqid:1 cid:0 nsid:1 lba:45624 len:8 PRP1 0x0 PRP2 0x0, completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:07.288 [2024-11-18 07:17:21.622854 - 07:17:21.625544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o - repeated for each queued request; every one is completed manually (558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:), printed (WRITE nsid:1 lba:45632-45744, READ nsid:1 lba:44728, WRITE nsid:1 lba:44864-45152, all sqid:1 cid:0 len:8 PRP1 0x0 PRP2 0x0), and completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:07.290 [2024-11-18 07:17:21.625559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*:
aborting queued i/o 00:32:07.290 [2024-11-18 07:17:21.625570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.290 [2024-11-18 07:17:21.625581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45160 len:8 PRP1 0x0 PRP2 0x0 00:32:07.290 [2024-11-18 07:17:21.625593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.290 [2024-11-18 07:17:21.625606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.290 [2024-11-18 07:17:21.625617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.290 [2024-11-18 07:17:21.625628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45168 len:8 PRP1 0x0 PRP2 0x0 00:32:07.290 [2024-11-18 07:17:21.625641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.290 [2024-11-18 07:17:21.625653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.290 [2024-11-18 07:17:21.625665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.290 [2024-11-18 07:17:21.625676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45176 len:8 PRP1 0x0 PRP2 0x0 00:32:07.290 [2024-11-18 07:17:21.625688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.290 [2024-11-18 07:17:21.625701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.290 [2024-11-18 07:17:21.625712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.290 [2024-11-18 07:17:21.625723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44736 len:8 PRP1 0x0 PRP2 0x0 00:32:07.290 [2024-11-18 07:17:21.625736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.290 [2024-11-18 07:17:21.625749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.290 [2024-11-18 07:17:21.625760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.290 [2024-11-18 07:17:21.625777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44744 len:8 PRP1 0x0 PRP2 0x0 00:32:07.290 [2024-11-18 07:17:21.625791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.290 [2024-11-18 07:17:21.625820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.290 [2024-11-18 07:17:21.625831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.290 [2024-11-18 07:17:21.625842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44752 len:8 PRP1 0x0 PRP2 0x0 00:32:07.290 [2024-11-18 07:17:21.625854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.290 [2024-11-18 07:17:21.625867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.290 [2024-11-18 
07:17:21.625877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.290 [2024-11-18 07:17:21.625888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44760 len:8 PRP1 0x0 PRP2 0x0 00:32:07.290 [2024-11-18 07:17:21.625901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.290 [2024-11-18 07:17:21.625914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.290 [2024-11-18 07:17:21.625924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.290 [2024-11-18 07:17:21.625935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44768 len:8 PRP1 0x0 PRP2 0x0 00:32:07.290 [2024-11-18 07:17:21.625948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.290 [2024-11-18 07:17:21.625961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.290 [2024-11-18 07:17:21.625971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.290 [2024-11-18 07:17:21.625982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44776 len:8 PRP1 0x0 PRP2 0x0 00:32:07.290 [2024-11-18 07:17:21.625994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.291 [2024-11-18 07:17:21.626007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.291 [2024-11-18 07:17:21.626018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.291 [2024-11-18 07:17:21.626029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44784 len:8 PRP1 0x0 PRP2 0x0 00:32:07.291 [2024-11-18 07:17:21.626041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.291 [2024-11-18 07:17:21.626054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.291 [2024-11-18 07:17:21.626065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.291 [2024-11-18 07:17:21.626076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45184 len:8 PRP1 0x0 PRP2 0x0 00:32:07.291 [2024-11-18 07:17:21.626089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.291 [2024-11-18 07:17:21.626101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.291 [2024-11-18 07:17:21.626112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.291 [2024-11-18 07:17:21.626123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45192 len:8 PRP1 0x0 PRP2 0x0 00:32:07.291 [2024-11-18 07:17:21.626135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.291 [2024-11-18 07:17:21.626152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.291 [2024-11-18 07:17:21.626164] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.291 [2024-11-18 07:17:21.626174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45200 len:8 PRP1 0x0 PRP2 0x0 00:32:07.291 [2024-11-18 07:17:21.626186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.291 [2024-11-18 07:17:21.626199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.291 [2024-11-18 07:17:21.626210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.291 [2024-11-18 07:17:21.626221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45208 len:8 PRP1 0x0 PRP2 0x0 00:32:07.291 [2024-11-18 07:17:21.626233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.291 [2024-11-18 07:17:21.626246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.291 [2024-11-18 07:17:21.626256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.291 [2024-11-18 07:17:21.626267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45216 len:8 PRP1 0x0 PRP2 0x0 00:32:07.291 [2024-11-18 07:17:21.626279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.291 [2024-11-18 07:17:21.626292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.291 [2024-11-18 07:17:21.626302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.291 [2024-11-18 07:17:21.626313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45224 len:8 PRP1 0x0 PRP2 0x0 00:32:07.291 [2024-11-18 07:17:21.626326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.291 [2024-11-18 07:17:21.626338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.291 [2024-11-18 07:17:21.626349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.291 [2024-11-18 07:17:21.626360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45232 len:8 PRP1 0x0 PRP2 0x0 00:32:07.291 [2024-11-18 07:17:21.626372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.291 [2024-11-18 07:17:21.626385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.291 [2024-11-18 07:17:21.626396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.291 [2024-11-18 07:17:21.626407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45240 len:8 PRP1 0x0 PRP2 0x0 00:32:07.291 [2024-11-18 07:17:21.626419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.291 [2024-11-18 07:17:21.626433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.291 [2024-11-18 07:17:21.626444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:32:07.291 [2024-11-18 07:17:21.626454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45248 len:8 PRP1 0x0 PRP2 0x0 00:32:07.291 [2024-11-18 07:17:21.626467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.291 [2024-11-18 07:17:21.626485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.291 [2024-11-18 07:17:21.626518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.291 [2024-11-18 07:17:21.626531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45256 len:8 PRP1 0x0 PRP2 0x0 00:32:07.291 [2024-11-18 07:17:21.626548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.291 [2024-11-18 07:17:21.626562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.291 [2024-11-18 07:17:21.626574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.291 [2024-11-18 07:17:21.626585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45264 len:8 PRP1 0x0 PRP2 0x0 00:32:07.291 [2024-11-18 07:17:21.626598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.291 [2024-11-18 07:17:21.626611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.291 [2024-11-18 07:17:21.626622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.291 [2024-11-18 07:17:21.626633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45272 len:8 PRP1 0x0 PRP2 0x0 00:32:07.291 [2024-11-18 07:17:21.626645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.291 [2024-11-18 07:17:21.626659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.291 [2024-11-18 07:17:21.626671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.291 [2024-11-18 07:17:21.626681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45280 len:8 PRP1 0x0 PRP2 0x0 00:32:07.291 [2024-11-18 07:17:21.626694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.291 [2024-11-18 07:17:21.626707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.291 [2024-11-18 07:17:21.626718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.291 [2024-11-18 07:17:21.626729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45288 len:8 PRP1 0x0 PRP2 0x0 00:32:07.291 [2024-11-18 07:17:21.626742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.291 [2024-11-18 07:17:21.626755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.291 [2024-11-18 07:17:21.626766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.291 [2024-11-18 
07:17:21.626777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45296 len:8 PRP1 0x0 PRP2 0x0 00:32:07.291 [2024-11-18 07:17:21.626817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.291 [2024-11-18 07:17:21.626831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.291 [2024-11-18 07:17:21.626841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.292 [2024-11-18 07:17:21.626852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45304 len:8 PRP1 0x0 PRP2 0x0 00:32:07.292 [2024-11-18 07:17:21.626864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.292 [2024-11-18 07:17:21.626877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.292 [2024-11-18 07:17:21.626888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.292 [2024-11-18 07:17:21.626899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45312 len:8 PRP1 0x0 PRP2 0x0 00:32:07.292 [2024-11-18 07:17:21.626911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.292 [2024-11-18 07:17:21.626923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.292 [2024-11-18 07:17:21.626934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.292 [2024-11-18 07:17:21.626949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45320 len:8 PRP1 0x0 PRP2 0x0 00:32:07.292 [2024-11-18 07:17:21.626962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.292 [2024-11-18 07:17:21.626974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.292 [2024-11-18 07:17:21.626985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.292 [2024-11-18 07:17:21.626996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45328 len:8 PRP1 0x0 PRP2 0x0 00:32:07.292 [2024-11-18 07:17:21.627008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.292 [2024-11-18 07:17:21.627020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.292 [2024-11-18 07:17:21.627031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.292 [2024-11-18 07:17:21.627042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45336 len:8 PRP1 0x0 PRP2 0x0 00:32:07.292 [2024-11-18 07:17:21.627054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.292 [2024-11-18 07:17:21.627067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.292 [2024-11-18 07:17:21.627082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.292 [2024-11-18 07:17:21.627094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45344 len:8 PRP1 0x0 PRP2 0x0 00:32:07.292 [2024-11-18 07:17:21.627106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.292 [2024-11-18 07:17:21.627119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.292 [2024-11-18 07:17:21.627130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.292 [2024-11-18 07:17:21.627141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45352 len:8 PRP1 0x0 PRP2 0x0 00:32:07.292 [2024-11-18 07:17:21.627153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.292 [2024-11-18 07:17:21.627166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.292 [2024-11-18 07:17:21.627177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.292 [2024-11-18 07:17:21.627188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45360 len:8 PRP1 0x0 PRP2 0x0 00:32:07.292 [2024-11-18 07:17:21.627200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.292 [2024-11-18 07:17:21.627213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.292 [2024-11-18 07:17:21.627223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.292 [2024-11-18 07:17:21.627234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45368 len:8 PRP1 0x0 PRP2 0x0 00:32:07.292 [2024-11-18 07:17:21.627247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.292 [2024-11-18 07:17:21.627259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.292 [2024-11-18 07:17:21.627270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.292 [2024-11-18 07:17:21.627280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45376 len:8 PRP1 0x0 PRP2 0x0 00:32:07.292 [2024-11-18 07:17:21.627292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.292 [2024-11-18 07:17:21.627305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.292 [2024-11-18 07:17:21.627319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.292 [2024-11-18 07:17:21.627330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45384 len:8 PRP1 0x0 PRP2 0x0 00:32:07.292 [2024-11-18 07:17:21.627342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.292 [2024-11-18 07:17:21.627355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.292 [2024-11-18 07:17:21.632608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.292 [2024-11-18 07:17:21.632638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:45392 len:8 PRP1 0x0 PRP2 0x0 00:32:07.292 [2024-11-18 07:17:21.632655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.292 [2024-11-18 07:17:21.632671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.292 [2024-11-18 07:17:21.632683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.292 [2024-11-18 07:17:21.632694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45400 len:8 PRP1 0x0 PRP2 0x0 00:32:07.292 [2024-11-18 07:17:21.632708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.292 [2024-11-18 07:17:21.632722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.292 [2024-11-18 07:17:21.632734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.292 [2024-11-18 07:17:21.632745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45408 len:8 PRP1 0x0 PRP2 0x0 00:32:07.292 [2024-11-18 07:17:21.632758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.292 [2024-11-18 07:17:21.632771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.292 [2024-11-18 07:17:21.632798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.292 [2024-11-18 07:17:21.632809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45416 len:8 PRP1 0x0 PRP2 0x0 00:32:07.292 [2024-11-18 07:17:21.632823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.292 [2024-11-18 07:17:21.632837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.292 [2024-11-18 07:17:21.632848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.292 [2024-11-18 07:17:21.632859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45424 len:8 PRP1 0x0 PRP2 0x0 00:32:07.292 [2024-11-18 07:17:21.632871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.292 [2024-11-18 07:17:21.632884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.292 [2024-11-18 07:17:21.632895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.292 [2024-11-18 07:17:21.632906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45432 len:8 PRP1 0x0 PRP2 0x0 00:32:07.292 [2024-11-18 07:17:21.632919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.292 [2024-11-18 07:17:21.632931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.292 [2024-11-18 07:17:21.632942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.292 [2024-11-18 07:17:21.632953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45440 len:8 PRP1 0x0 PRP2 0x0 
00:32:07.292 [2024-11-18 07:17:21.632965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.292 [2024-11-18 07:17:21.632985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.292 [2024-11-18 07:17:21.632997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.292 [2024-11-18 07:17:21.633008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45448 len:8 PRP1 0x0 PRP2 0x0 00:32:07.292 [2024-11-18 07:17:21.633021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.292 [2024-11-18 07:17:21.633034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.292 [2024-11-18 07:17:21.633045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.292 [2024-11-18 07:17:21.633056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45456 len:8 PRP1 0x0 PRP2 0x0 00:32:07.292 [2024-11-18 07:17:21.633068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.292 [2024-11-18 07:17:21.633081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.292 [2024-11-18 07:17:21.633092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.292 [2024-11-18 07:17:21.633103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45464 len:8 PRP1 0x0 PRP2 0x0 00:32:07.293 [2024-11-18 07:17:21.633115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.293 [2024-11-18 07:17:21.633128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.293 [2024-11-18 07:17:21.633139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.293 [2024-11-18 07:17:21.633150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45472 len:8 PRP1 0x0 PRP2 0x0 00:32:07.293 [2024-11-18 07:17:21.633162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.293 [2024-11-18 07:17:21.633175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.293 [2024-11-18 07:17:21.633186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.293 [2024-11-18 07:17:21.633197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45480 len:8 PRP1 0x0 PRP2 0x0 00:32:07.293 [2024-11-18 07:17:21.633210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.293 [2024-11-18 07:17:21.633223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.293 [2024-11-18 07:17:21.633234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.293 [2024-11-18 07:17:21.633245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45488 len:8 PRP1 0x0 PRP2 0x0 00:32:07.293 [2024-11-18 07:17:21.633258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.293 [2024-11-18 07:17:21.633271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.293 [2024-11-18 07:17:21.633282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.293 [2024-11-18 07:17:21.633293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45496 len:8 PRP1 0x0 PRP2 0x0 00:32:07.293 [2024-11-18 07:17:21.633305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.293 [2024-11-18 07:17:21.633318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.293 [2024-11-18 07:17:21.633329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.293 [2024-11-18 07:17:21.633339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44792 len:8 PRP1 0x0 PRP2 0x0 00:32:07.293 [2024-11-18 07:17:21.633356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.293 [2024-11-18 07:17:21.633370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.293 [2024-11-18 07:17:21.633381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.293 [2024-11-18 07:17:21.633392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44800 len:8 PRP1 0x0 PRP2 0x0 00:32:07.293 [2024-11-18 07:17:21.633404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.293 [2024-11-18 07:17:21.633417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.293 [2024-11-18 07:17:21.633428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.293 [2024-11-18 07:17:21.633439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44808 len:8 PRP1 0x0 PRP2 0x0 00:32:07.293 [2024-11-18 07:17:21.633451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.293 [2024-11-18 07:17:21.633464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.293 [2024-11-18 07:17:21.633505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.293 [2024-11-18 07:17:21.633520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44816 len:8 PRP1 0x0 PRP2 0x0 00:32:07.293 [2024-11-18 07:17:21.633533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.293 [2024-11-18 07:17:21.633547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.293 [2024-11-18 07:17:21.633558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.293 [2024-11-18 07:17:21.633569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44824 len:8 PRP1 0x0 PRP2 0x0 00:32:07.293 [2024-11-18 07:17:21.633582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.293 [2024-11-18 07:17:21.633595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.293 [2024-11-18 07:17:21.633606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.293 [2024-11-18 07:17:21.633618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44832 len:8 PRP1 0x0 PRP2 0x0 00:32:07.293 [2024-11-18 07:17:21.633630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.293 [2024-11-18 07:17:21.633644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.293 [2024-11-18 07:17:21.633655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.293 [2024-11-18 07:17:21.633666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44840 len:8 PRP1 0x0 PRP2 0x0 00:32:07.293 [2024-11-18 07:17:21.633679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.293 [2024-11-18 07:17:21.633692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.293 [2024-11-18 07:17:21.633703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.293 [2024-11-18 07:17:21.633714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44848 len:8 PRP1 0x0 PRP2 0x0 00:32:07.293 [2024-11-18 07:17:21.633727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.293 [2024-11-18 07:17:21.633740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.293 [2024-11-18 07:17:21.633751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.293 [2024-11-18 07:17:21.633766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44856 len:8 PRP1 0x0 PRP2 0x0 00:32:07.293 [2024-11-18 07:17:21.633795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.293 [2024-11-18 07:17:21.633808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.293 [2024-11-18 07:17:21.633820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.293 [2024-11-18 07:17:21.633830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45504 len:8 PRP1 0x0 PRP2 0x0 00:32:07.293 [2024-11-18 07:17:21.633843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.293 [2024-11-18 07:17:21.633856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.293 [2024-11-18 07:17:21.633867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.293 [2024-11-18 07:17:21.633878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45512 len:8 PRP1 0x0 PRP2 0x0 00:32:07.293 [2024-11-18 07:17:21.633890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:07.293 [2024-11-18 07:17:21.633903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.293 [2024-11-18 07:17:21.633914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.293 [2024-11-18 07:17:21.633925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45520 len:8 PRP1 0x0 PRP2 0x0 00:32:07.293 [2024-11-18 07:17:21.633937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.293 [2024-11-18 07:17:21.633950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.293 [2024-11-18 07:17:21.633961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.293 [2024-11-18 07:17:21.633972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45528 len:8 PRP1 0x0 PRP2 0x0 00:32:07.293 [2024-11-18 07:17:21.633984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.293 [2024-11-18 07:17:21.633997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.293 [2024-11-18 07:17:21.634007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.293 [2024-11-18 07:17:21.634018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45536 len:8 PRP1 0x0 PRP2 0x0 00:32:07.293 [2024-11-18 07:17:21.634031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.293 [2024-11-18 07:17:21.634044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.293 [2024-11-18 07:17:21.634054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.293 [2024-11-18 07:17:21.634065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45544 len:8 PRP1 0x0 PRP2 0x0 00:32:07.293 [2024-11-18 07:17:21.634078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.293 [2024-11-18 07:17:21.634090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.293 [2024-11-18 07:17:21.634101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.293 [2024-11-18 07:17:21.634112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45552 len:8 PRP1 0x0 PRP2 0x0 00:32:07.294 [2024-11-18 07:17:21.634125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.294 [2024-11-18 07:17:21.634141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.294 [2024-11-18 07:17:21.634152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.294 [2024-11-18 07:17:21.634163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45560 len:8 PRP1 0x0 PRP2 0x0 00:32:07.294 [2024-11-18 07:17:21.634176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.294 [2024-11-18 07:17:21.634189] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.294 [2024-11-18 07:17:21.634201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.294 [2024-11-18 07:17:21.634212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45568 len:8 PRP1 0x0 PRP2 0x0 00:32:07.294 [2024-11-18 07:17:21.634224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.294 [2024-11-18 07:17:21.634237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.294 [2024-11-18 07:17:21.634248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.294 [2024-11-18 07:17:21.634259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45576 len:8 PRP1 0x0 PRP2 0x0 00:32:07.294 [2024-11-18 07:17:21.634272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.294 [2024-11-18 07:17:21.634285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.294 [2024-11-18 07:17:21.634295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.294 [2024-11-18 07:17:21.634306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45584 len:8 PRP1 0x0 PRP2 0x0 00:32:07.294 [2024-11-18 07:17:21.634318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.294 [2024-11-18 07:17:21.634331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.294 [2024-11-18 07:17:21.634342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.294 [2024-11-18 07:17:21.634353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45592 len:8 PRP1 0x0 PRP2 0x0 00:32:07.294 [2024-11-18 07:17:21.634366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.294 [2024-11-18 07:17:21.634379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.294 [2024-11-18 07:17:21.634390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.294 [2024-11-18 07:17:21.634401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45600 len:8 PRP1 0x0 PRP2 0x0 00:32:07.294 [2024-11-18 07:17:21.634414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.294 [2024-11-18 07:17:21.634427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.294 [2024-11-18 07:17:21.634438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.294 [2024-11-18 07:17:21.634449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45608 len:8 PRP1 0x0 PRP2 0x0 00:32:07.294 [2024-11-18 07:17:21.634461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.294 [2024-11-18 07:17:21.634508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:32:07.294 [2024-11-18 07:17:21.634521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.294 [2024-11-18 07:17:21.634533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45616 len:8 PRP1 0x0 PRP2 0x0 00:32:07.294 [2024-11-18 07:17:21.634550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.294 [2024-11-18 07:17:21.634565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.294 [2024-11-18 07:17:21.634576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.294 [2024-11-18 07:17:21.634587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45624 len:8 PRP1 0x0 PRP2 0x0 00:32:07.294 [2024-11-18 07:17:21.634600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.294 [2024-11-18 07:17:21.634662] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:07.294 [2024-11-18 07:17:21.634704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.294 [2024-11-18 07:17:21.634723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.294 [2024-11-18 07:17:21.634740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.294 [2024-11-18 07:17:21.634754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.294 [2024-11-18 07:17:21.634768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.294 [2024-11-18 07:17:21.634782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.294 [2024-11-18 07:17:21.634798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.294 [2024-11-18 07:17:21.634811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.294 [2024-11-18 07:17:21.634825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:32:07.294 [2024-11-18 07:17:21.634866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d53b0 (9): Bad file descriptor 00:32:07.294 [2024-11-18 07:17:21.638116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:32:07.294 [2024-11-18 07:17:21.706772] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
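For readers reconstructing what produced the failover messages above ("Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 ... Resetting controller successful."): the rpc.py calls later in this trace attach one bdevperf controller over three TCP paths in failover mode and then detach the active path, which is what drives the abort and reset sequence. A minimal sketch of that RPC flow, assuming a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420; the socket path, NQN, ports, and rpc.py options are copied from this trace, while the RPC/SOCK/NQN variables and the port loop are illustrative shorthand, not part of the test script:

# Paths and identifiers below are taken from the trace.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# Expose two extra target listeners so the host has somewhere to fail over to.
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422

# Attach the same controller over all three ports in failover mode.
for port in 4420 4421 4422; do
    $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
        -s $port -f ipv4 -n $NQN -x failover
done

# Detaching the active path forces bdev_nvme to fail over to the next path,
# which shows up in the log as "Resetting controller successful".
$RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN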
00:32:07.294 8386.70 IOPS, 32.76 MiB/s [2024-11-18T06:17:28.272Z] 8413.00 IOPS, 32.86 MiB/s [2024-11-18T06:17:28.272Z] 8443.25 IOPS, 32.98 MiB/s [2024-11-18T06:17:28.272Z] 8454.38 IOPS, 33.02 MiB/s [2024-11-18T06:17:28.272Z] 8480.71 IOPS, 33.13 MiB/s 00:32:07.294 Latency(us) 00:32:07.294 [2024-11-18T06:17:28.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:07.294 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:07.294 Verification LBA range: start 0x0 length 0x4000 00:32:07.294 NVMe0n1 : 15.01 8497.17 33.19 700.35 0.00 13889.76 515.79 23787.14 00:32:07.294 [2024-11-18T06:17:28.272Z] =================================================================================================================== 00:32:07.294 [2024-11-18T06:17:28.272Z] Total : 8497.17 33.19 700.35 0.00 13889.76 515.79 23787.14 00:32:07.294 Received shutdown signal, test time was about 15.000000 seconds 00:32:07.294 00:32:07.294 Latency(us) 00:32:07.294 [2024-11-18T06:17:28.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:07.294 [2024-11-18T06:17:28.272Z] =================================================================================================================== 00:32:07.294 [2024-11-18T06:17:28.272Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:07.294 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:07.294 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:07.294 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:07.294 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=362868 00:32:07.294 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:07.294 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 362868 /var/tmp/bdevperf.sock 00:32:07.294 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 362868 ']' 00:32:07.294 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:07.294 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:07.294 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:07.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:07.294 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:07.294 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:07.294 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:07.294 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:07.294 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:07.294 [2024-11-18 07:17:27.953807] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:07.295 07:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:07.295 [2024-11-18 07:17:28.214524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:07.295 07:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:07.864 NVMe0n1 00:32:07.864 07:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:08.433 00:32:08.433 07:17:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:08.691 00:32:08.691 07:17:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:08.691 07:17:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:08.949 07:17:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:09.209 07:17:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:12.501 07:17:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:12.501 07:17:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:12.501 07:17:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=363657 00:32:12.501 07:17:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:12.501 07:17:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 363657 00:32:13.877 { 00:32:13.877 "results": [ 00:32:13.877 { 00:32:13.877 "job": "NVMe0n1", 00:32:13.877 "core_mask": "0x1", 00:32:13.877 
"workload": "verify", 00:32:13.877 "status": "finished", 00:32:13.877 "verify_range": { 00:32:13.877 "start": 0, 00:32:13.877 "length": 16384 00:32:13.877 }, 00:32:13.877 "queue_depth": 128, 00:32:13.877 "io_size": 4096, 00:32:13.877 "runtime": 1.048763, 00:32:13.877 "iops": 8349.836903094407, 00:32:13.877 "mibps": 32.61655040271253, 00:32:13.877 "io_failed": 0, 00:32:13.877 "io_timeout": 0, 00:32:13.877 "avg_latency_us": 14728.03514682434, 00:32:13.877 "min_latency_us": 3373.8903703703704, 00:32:13.877 "max_latency_us": 46020.83555555555 00:32:13.877 } 00:32:13.877 ], 00:32:13.877 "core_count": 1 00:32:13.877 } 00:32:13.877 07:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:13.877 [2024-11-18 07:17:27.455595] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:32:13.877 [2024-11-18 07:17:27.455702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid362868 ] 00:32:13.877 [2024-11-18 07:17:27.526033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.877 [2024-11-18 07:17:27.571249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.877 [2024-11-18 07:17:30.132004] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:13.877 [2024-11-18 07:17:30.132144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.877 [2024-11-18 07:17:30.132168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.877 [2024-11-18 07:17:30.132186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.877 [2024-11-18 07:17:30.132200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.877 [2024-11-18 07:17:30.132215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.877 [2024-11-18 07:17:30.132229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.877 [2024-11-18 07:17:30.132243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.877 [2024-11-18 07:17:30.132257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.877 [2024-11-18 07:17:30.132285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:32:13.877 [2024-11-18 07:17:30.132340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:32:13.877 [2024-11-18 07:17:30.132378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c163b0 (9): Bad file descriptor 00:32:13.877 [2024-11-18 07:17:30.223609] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:32:13.877 Running I/O for 1 seconds... 00:32:13.877 8629.00 IOPS, 33.71 MiB/s 00:32:13.877 Latency(us) 00:32:13.877 [2024-11-18T06:17:34.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:13.877 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:13.877 Verification LBA range: start 0x0 length 0x4000 00:32:13.877 NVMe0n1 : 1.05 8349.84 32.62 0.00 0.00 14728.04 3373.89 46020.84 00:32:13.877 [2024-11-18T06:17:34.855Z] =================================================================================================================== 00:32:13.877 [2024-11-18T06:17:34.855Z] Total : 8349.84 32.62 0.00 0.00 14728.04 3373.89 46020.84 00:32:13.878 07:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:13.878 07:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:14.136 07:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:14.394 07:17:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:14.394 07:17:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:14.653 07:17:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:14.911 07:17:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:18.201 07:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:18.201 07:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:18.201 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 362868 00:32:18.201 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 362868 ']' 00:32:18.201 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 362868 00:32:18.201 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:18.201 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:18.201 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 362868 00:32:18.201 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:18.201 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:18.201 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 362868' 00:32:18.201 killing process with pid 362868 00:32:18.201 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 362868 00:32:18.201 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 362868 00:32:18.459 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:18.459 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:18.717 rmmod nvme_tcp 00:32:18.717 rmmod nvme_fabrics 00:32:18.717 rmmod nvme_keyring 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 360717 ']' 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 360717 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 360717 ']' 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 360717 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 360717 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 360717' 00:32:18.717 killing process with pid 360717 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 360717 00:32:18.717 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 360717 00:32:18.975 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
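The failover exercise traced above reduces to a short RPC sequence against the target and the bdevperf app. A minimal sketch, with paths shown relative to the SPDK tree and the 10.0.0.2 addressing, NVMe0 controller name, and /var/tmp/bdevperf.sock socket taken from the trace (the loop is an illustrative condensation, not the literal script):

  # give the initiator two extra paths to the same subsystem
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # attach the subsystem through all three ports, flagging the extra paths for failover
  for port in 4420 4421 4422; do
      scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  done

  # drop the primary path, then run I/O; bdevperf should fail over to 4421
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

  # the controller must still be present after the reset completes
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0

The try.txt dump above shows the expected outcome: the 4420 connection is aborted (SQ DELETION), the controller resets onto a surviving path, and bdevperf still completes roughly 8.3k IOPS of verify I/O.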
00:32:18.975 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:18.975 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:18.975 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:18.975 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:32:18.975 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:18.975 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:32:18.975 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:18.975 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:18.975 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.975 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:18.975 07:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.517 07:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:21.517 00:32:21.517 real 0m35.784s 00:32:21.517 user 2m6.494s 00:32:21.517 sys 0m5.858s 00:32:21.517 07:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:21.517 07:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:21.517 ************************************ 00:32:21.517 END TEST nvmf_failover 00:32:21.517 ************************************ 00:32:21.517 07:17:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:21.517 07:17:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:21.517 07:17:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:21.517 07:17:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.517 ************************************ 00:32:21.517 START TEST nvmf_host_discovery 00:32:21.517 ************************************ 00:32:21.517 07:17:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:21.517 * Looking for test storage... 
00:32:21.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:21.517 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:21.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.517 --rc genhtml_branch_coverage=1 00:32:21.517 --rc genhtml_function_coverage=1 00:32:21.517 --rc genhtml_legend=1 00:32:21.517 --rc geninfo_all_blocks=1 00:32:21.518 --rc geninfo_unexecuted_blocks=1 00:32:21.518 00:32:21.518 ' 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:21.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.518 --rc genhtml_branch_coverage=1 00:32:21.518 --rc genhtml_function_coverage=1 00:32:21.518 --rc genhtml_legend=1 00:32:21.518 --rc geninfo_all_blocks=1 00:32:21.518 --rc geninfo_unexecuted_blocks=1 00:32:21.518 00:32:21.518 ' 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:21.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.518 --rc genhtml_branch_coverage=1 00:32:21.518 --rc genhtml_function_coverage=1 00:32:21.518 --rc genhtml_legend=1 00:32:21.518 --rc geninfo_all_blocks=1 00:32:21.518 --rc geninfo_unexecuted_blocks=1 00:32:21.518 00:32:21.518 ' 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:21.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.518 --rc genhtml_branch_coverage=1 00:32:21.518 --rc genhtml_function_coverage=1 00:32:21.518 --rc genhtml_legend=1 00:32:21.518 --rc geninfo_all_blocks=1 00:32:21.518 --rc geninfo_unexecuted_blocks=1 00:32:21.518 00:32:21.518 ' 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:21.518 07:17:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:21.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:21.518 07:17:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:23.426 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:23.426 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:23.427 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:23.427 07:17:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:23.427 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:23.427 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:23.427 
07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:23.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:23.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:32:23.427 00:32:23.427 --- 10.0.0.2 ping statistics --- 00:32:23.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.427 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:23.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:23.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:32:23.427 00:32:23.427 --- 10.0.0.1 ping statistics --- 00:32:23.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.427 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=366260 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 366260 00:32:23.427 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 366260 ']' 00:32:23.428 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:23.428 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:23.428 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:23.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:23.428 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:23.428 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.428 [2024-11-18 07:17:44.392846] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
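The addressing used throughout these host tests is prepared by nvmf_tcp_init in the trace just above: the first e810 port (cvl_0_0) is moved into a dedicated network namespace and becomes the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the commands in the trace (interface names and addresses as logged; run as root; the iptables comment option is omitted here):

  ip netns add cvl_0_0_ns_spdk                        # namespace that hosts the nvmf target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                  # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator sanity check

Every target-side nvmf_tgt is then launched through ip netns exec cvl_0_0_ns_spdk, which is why its 10.0.0.2 listeners are reachable from the host-side apps running in the root namespace.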
00:32:23.428 [2024-11-18 07:17:44.392947] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:23.686 [2024-11-18 07:17:44.465469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.686 [2024-11-18 07:17:44.507206] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:23.686 [2024-11-18 07:17:44.507267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:23.686 [2024-11-18 07:17:44.507290] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:23.686 [2024-11-18 07:17:44.507301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:23.686 [2024-11-18 07:17:44.507310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:23.686 [2024-11-18 07:17:44.507911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.686 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:23.686 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:23.686 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:23.686 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:23.686 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.686 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:23.686 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:23.686 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.686 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.686 [2024-11-18 07:17:44.646186] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:23.686 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.686 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:23.686 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.686 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.686 [2024-11-18 07:17:44.654352] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:23.686 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.686 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:23.686 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.686 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.945 null0 00:32:23.945 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.945 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:23.945 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.945 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.945 null1 00:32:23.945 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.945 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:23.945 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.945 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.945 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.945 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=366366 00:32:23.945 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:23.945 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 366366 /tmp/host.sock 00:32:23.945 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 366366 ']' 00:32:23.945 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:32:23.945 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:23.945 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:23.945 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:23.945 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:23.945 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.945 [2024-11-18 07:17:44.729139] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
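At this point discovery.sh has two SPDK applications up: the target (nvmf_tgt -i 0 -e 0xFFFF -m 0x2 inside the namespace, RPC on the default /var/tmp/spdk.sock) and a host-side app (nvmf_tgt -m 0x1 -r /tmp/host.sock) that will act as the discovery client. The target-side preparation traced above, written out as plain rpc.py calls (rpc_cmd in the trace is the test suite's wrapper around the same RPCs; paths are relative to the SPDK tree):

  # target side, inside cvl_0_0_ns_spdk
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009                  # discovery service on port 8009
  scripts/rpc.py bdev_null_create null0 1000 512  # null bdevs to back the test subsystems
  scripts/rpc.py bdev_null_create null1 1000 512
  scripts/rpc.py bdev_wait_for_examine

  # host side: a second nvmf_tgt whose RPC socket is /tmp/host.sock
  build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &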
00:32:23.945 [2024-11-18 07:17:44.729210] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid366366 ] 00:32:23.945 [2024-11-18 07:17:44.794537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.945 [2024-11-18 07:17:44.840232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.203 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:24.203 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:24.203 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:24.203 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:24.203 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.203 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.203 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.203 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:24.203 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.203 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.203 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.203 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:24.203 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:24.203 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:24.203 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:24.203 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.204 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:24.204 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.204 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:24.204 07:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd 
-s /tmp/host.sock bdev_nvme_get_controllers 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:24.204 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.463 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:24.463 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:24.463 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:24.463 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:24.463 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.463 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.463 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:24.463 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:24.463 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.463 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:24.463 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:24.463 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.463 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.463 [2024-11-18 07:17:45.244012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:24.463 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.463 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:24.463 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:24.463 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:24.463 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.463 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:24.463 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:32:24.464 07:17:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:25.035 [2024-11-18 07:17:46.013647] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:25.035 [2024-11-18 07:17:46.013686] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:25.035 [2024-11-18 07:17:46.013707] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:25.296 
[2024-11-18 07:17:46.099997] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:25.296 [2024-11-18 07:17:46.161778] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:32:25.296 [2024-11-18 07:17:46.162784] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x175a1b0:1 started. 00:32:25.296 [2024-11-18 07:17:46.164538] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:25.296 [2024-11-18 07:17:46.164558] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:25.296 [2024-11-18 07:17:46.212195] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x175a1b0 was disconnected and freed. delete nvme_qpair. 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:25.555 07:17:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:25.555 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:25.815 07:17:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:25.815 [2024-11-18 07:17:46.768801] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x17443e0:1 started. 00:32:25.815 [2024-11-18 07:17:46.773296] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x17443e0 was disconnected and freed. delete nvme_qpair. 
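Note on the helpers exercised throughout this trace: get_subsystem_names, get_bdev_list, get_subsystem_paths and waitforcondition are small wrappers from host/discovery.sh and autotest_common.sh that poll the host-side SPDK application over its JSON-RPC socket (/tmp/host.sock). The following is a minimal sketch reconstructed from the xtrace output above, not the exact in-tree definitions; rpc_cmd is assumed to be the suite's wrapper around the SPDK rpc.py client.

    HOST_SOCK=/tmp/host.sock   # host-side SPDK app RPC socket used in this run

    get_subsystem_names() {
        # Controller names the host-side bdev_nvme module currently has attached.
        rpc_cmd -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        # Namespace bdevs exposed on the host side, e.g. "nvme0n1 nvme0n2".
        rpc_cmd -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {
        # TCP service ports (trsvcid) of every path of one controller, e.g. "4420 4421".
        rpc_cmd -s "$HOST_SOCK" bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    waitforcondition() {
        # Re-evaluate an arbitrary shell condition once per second, up to 10 tries.
        local cond=$1
        local max=10
        while ((max--)); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

Typical use, as seen in the trace: waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'.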
00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:25.815 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:26.075 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:26.075 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:26.075 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:26.075 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:26.075 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:26.075 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.075 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.075 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.075 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:26.075 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:26.075 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:26.075 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:26.075 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:26.075 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.075 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.075 [2024-11-18 07:17:46.841362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:26.075 [2024-11-18 07:17:46.841636] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:26.075 [2024-11-18 07:17:46.841666] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:26.075 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.075 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.076 [2024-11-18 07:17:46.928269] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:26.076 07:17:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:26.337 [2024-11-18 07:17:47.230907] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:32:26.337 [2024-11-18 07:17:47.230953] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:26.337 [2024-11-18 07:17:47.230967] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:26.337 [2024-11-18 07:17:47.230975] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:27.278 07:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:27.278 07:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:27.278 07:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:27.278 07:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:27.278 07:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:27.278 07:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:27.278 07:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.278 07:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:27.278 07:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:27.278 07:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.278 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:27.278 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:27.278 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:27.278 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:27.278 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:27.278 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:27.278 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:27.278 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:27.278 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:27.278 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:27.278 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:27.278 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:27.278 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.278 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.279 [2024-11-18 07:17:48.065331] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:27.279 [2024-11-18 07:17:48.065380] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:27.279 [2024-11-18 07:17:48.073634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.279 [2024-11-18 07:17:48.073671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.279 [2024-11-18 07:17:48.073689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 
cdw11:00000000 00:32:27.279 [2024-11-18 07:17:48.073704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.279 [2024-11-18 07:17:48.073719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.279 [2024-11-18 07:17:48.073733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.279 [2024-11-18 07:17:48.073748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.279 [2024-11-18 07:17:48.073762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.279 [2024-11-18 07:17:48.073777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172c1f0 is same with the state(6) to be set 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.279 [2024-11-18 07:17:48.083639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172c1f0 (9): Bad file descriptor 00:32:27.279 [2024-11-18 07:17:48.093682] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:27.279 [2024-11-18 07:17:48.093706] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:27.279 [2024-11-18 07:17:48.093717] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:27.279 [2024-11-18 07:17:48.093726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:27.279 [2024-11-18 07:17:48.093758] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:27.279 [2024-11-18 07:17:48.093983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.279 [2024-11-18 07:17:48.094013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x172c1f0 with addr=10.0.0.2, port=4420 00:32:27.279 [2024-11-18 07:17:48.094031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172c1f0 is same with the state(6) to be set 00:32:27.279 [2024-11-18 07:17:48.094054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172c1f0 (9): Bad file descriptor 00:32:27.279 [2024-11-18 07:17:48.094088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:27.279 [2024-11-18 07:17:48.094106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:27.279 [2024-11-18 07:17:48.094123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:27.279 [2024-11-18 07:17:48.094136] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:27.279 [2024-11-18 07:17:48.094147] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
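The notification checks in the trace (host/discovery.sh@74-75 and the is_notification_count_eq calls) count bdev notify events newer than the last consumed offset. A hedged reconstruction from the xtrace, assuming notify_id simply accumulates the returned count (the values 0, 1, 2, 2, 4 seen above are consistent with that):

    notify_id=0

    get_notification_count() {
        # Number of notify events newer than the last offset we consumed.
        notification_count=$(rpc_cmd -s "$HOST_SOCK" notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    is_notification_count_eq() {
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }

In this run the count is 1 after the first namespace bdev appears, 1 again after null1 is added, and 0 once the extra 4421 path shows up, since an additional path to an existing namespace does not create a new bdev.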
00:32:27.279 [2024-11-18 07:17:48.094155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:27.279 [2024-11-18 07:17:48.103796] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:27.279 [2024-11-18 07:17:48.103818] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:27.279 [2024-11-18 07:17:48.103827] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:27.279 [2024-11-18 07:17:48.103849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:27.279 [2024-11-18 07:17:48.103874] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:27.279 [2024-11-18 07:17:48.103999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.279 [2024-11-18 07:17:48.104028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x172c1f0 with addr=10.0.0.2, port=4420 00:32:27.279 [2024-11-18 07:17:48.104045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172c1f0 is same with the state(6) to be set 00:32:27.279 [2024-11-18 07:17:48.104068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172c1f0 (9): Bad file descriptor 00:32:27.279 [2024-11-18 07:17:48.104088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:27.279 [2024-11-18 07:17:48.104102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:27.279 [2024-11-18 07:17:48.104116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:27.279 [2024-11-18 07:17:48.104129] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:27.279 [2024-11-18 07:17:48.104138] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:27.279 [2024-11-18 07:17:48.104146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:27.279 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:27.279 [2024-11-18 07:17:48.113915] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:27.279 [2024-11-18 07:17:48.113939] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:27.279 [2024-11-18 07:17:48.113950] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:27.279 [2024-11-18 07:17:48.113963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:27.279 [2024-11-18 07:17:48.113991] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:27.279 [2024-11-18 07:17:48.114187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.279 [2024-11-18 07:17:48.114216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x172c1f0 with addr=10.0.0.2, port=4420 00:32:27.279 [2024-11-18 07:17:48.114235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172c1f0 is same with the state(6) to be set 00:32:27.279 [2024-11-18 07:17:48.114258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172c1f0 (9): Bad file descriptor 00:32:27.279 [2024-11-18 07:17:48.114291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:27.279 [2024-11-18 07:17:48.114309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:27.279 [2024-11-18 07:17:48.114324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:32:27.279 [2024-11-18 07:17:48.114337] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:27.279 [2024-11-18 07:17:48.114346] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:27.279 [2024-11-18 07:17:48.114354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:27.280 [2024-11-18 07:17:48.124025] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:27.280 [2024-11-18 07:17:48.124048] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:27.280 [2024-11-18 07:17:48.124058] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:27.280 [2024-11-18 07:17:48.124065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:27.280 [2024-11-18 07:17:48.124090] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:27.280 [2024-11-18 07:17:48.124246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.280 [2024-11-18 07:17:48.124276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x172c1f0 with addr=10.0.0.2, port=4420 00:32:27.280 [2024-11-18 07:17:48.124293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172c1f0 is same with the state(6) to be set 00:32:27.280 [2024-11-18 07:17:48.124327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172c1f0 (9): Bad file descriptor 00:32:27.280 [2024-11-18 07:17:48.124350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:27.280 [2024-11-18 07:17:48.124364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:27.280 [2024-11-18 07:17:48.124379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:27.280 [2024-11-18 07:17:48.124391] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:27.280 [2024-11-18 07:17:48.124400] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:27.280 [2024-11-18 07:17:48.124408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:27.280 [2024-11-18 07:17:48.134138] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:27.280 [2024-11-18 07:17:48.134160] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:27.280 [2024-11-18 07:17:48.134170] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:27.280 [2024-11-18 07:17:48.134186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:27.280 [2024-11-18 07:17:48.134211] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
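The "connect() failed, errno = 111" (ECONNREFUSED) retries above are expected at this point: host/discovery.sh@127 has just removed the 4420 listener on the target, so the initiator's reconnect attempts to that port are refused until the discovery log update drops the path. Roughly, under the same assumptions as the sketches above:

    # Target side: drop the first listener (rpc_cmd without -s talks to the target app).
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Host side: wait until only the 4421 path remains and no new bdev events arrived.
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'   # 4421 in this run
    is_notification_count_eq 0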
00:32:27.280 [2024-11-18 07:17:48.134355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.280 [2024-11-18 07:17:48.134385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x172c1f0 with addr=10.0.0.2, port=4420 00:32:27.280 [2024-11-18 07:17:48.134401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172c1f0 is same with the state(6) to be set 00:32:27.280 [2024-11-18 07:17:48.134424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172c1f0 (9): Bad file descriptor 00:32:27.280 [2024-11-18 07:17:48.134457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:27.280 [2024-11-18 07:17:48.134475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:27.280 [2024-11-18 07:17:48.134500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:27.280 [2024-11-18 07:17:48.134516] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:27.280 [2024-11-18 07:17:48.134526] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:27.280 [2024-11-18 07:17:48.134534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.280 [2024-11-18 07:17:48.144245] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:27.280 [2024-11-18 07:17:48.144267] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:27.280 [2024-11-18 07:17:48.144276] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:27.280 [2024-11-18 07:17:48.144283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:27.280 [2024-11-18 07:17:48.144307] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:27.280 [2024-11-18 07:17:48.144538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.280 [2024-11-18 07:17:48.144567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x172c1f0 with addr=10.0.0.2, port=4420 00:32:27.280 [2024-11-18 07:17:48.144584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172c1f0 is same with the state(6) to be set 00:32:27.280 [2024-11-18 07:17:48.144605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172c1f0 (9): Bad file descriptor 00:32:27.280 [2024-11-18 07:17:48.144627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:27.280 [2024-11-18 07:17:48.144642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:27.280 [2024-11-18 07:17:48.144655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:32:27.280 [2024-11-18 07:17:48.144668] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:27.280 [2024-11-18 07:17:48.144678] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:27.280 [2024-11-18 07:17:48.144685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.280 [2024-11-18 07:17:48.153250] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:27.280 [2024-11-18 07:17:48.153278] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == 
expected_count))' 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.280 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.281 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:27.281 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:27.541 07:17:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.541 07:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.483 [2024-11-18 07:17:49.423196] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:28.483 [2024-11-18 07:17:49.423226] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:28.483 [2024-11-18 07:17:49.423246] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:28.741 [2024-11-18 07:17:49.510523] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:28.741 [2024-11-18 07:17:49.615326] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:32:28.741 [2024-11-18 07:17:49.616034] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x17274d0:1 started. 00:32:28.741 [2024-11-18 07:17:49.618151] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:28.741 [2024-11-18 07:17:49.618192] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:28.741 [2024-11-18 07:17:49.621138] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x17274d0 was disconnected and freed. delete nvme_qpair. 
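The discovery steps traced above all go through SPDK's rpc.py against the host application's /tmp/host.sock socket (the test's rpc_cmd helper wraps that script). A condensed sketch of the same sequence, limited to the calls and flags visible in this run (the 10.0.0.2:8009 discovery endpoint, the nqn.2021-12.io.spdk:test hostnqn and the controller name "nvme" are this test's choices, not general defaults):

  # attach a discovery controller and wait for the initial attach to complete (-w)
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
  # re-issuing the call with the same -b name is expected to fail with -17 "File exists", as the trace shows
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name'   # discovery controllers
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'                 # bdevs attached via discovery
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme                  # tear the discovery controller down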
00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.741 request: 00:32:28.741 { 00:32:28.741 "name": "nvme", 00:32:28.741 "trtype": "tcp", 00:32:28.741 "traddr": "10.0.0.2", 00:32:28.741 "adrfam": "ipv4", 00:32:28.741 "trsvcid": "8009", 00:32:28.741 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:28.741 "wait_for_attach": true, 00:32:28.741 "method": "bdev_nvme_start_discovery", 00:32:28.741 "req_id": 1 00:32:28.741 } 00:32:28.741 Got JSON-RPC error response 00:32:28.741 response: 00:32:28.741 { 00:32:28.741 "code": -17, 00:32:28.741 "message": "File exists" 00:32:28.741 } 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
sort 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.741 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.999 request: 00:32:28.999 { 00:32:28.999 "name": "nvme_second", 00:32:28.999 "trtype": "tcp", 00:32:28.999 "traddr": "10.0.0.2", 00:32:28.999 "adrfam": "ipv4", 00:32:28.999 "trsvcid": "8009", 00:32:28.999 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:28.999 "wait_for_attach": true, 00:32:28.999 "method": "bdev_nvme_start_discovery", 00:32:28.999 "req_id": 1 00:32:28.999 } 00:32:28.999 Got JSON-RPC error response 00:32:28.999 response: 00:32:28.999 { 00:32:28.999 "code": -17, 00:32:28.999 "message": "File exists" 00:32:28.999 } 00:32:28.999 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:28.999 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:28.999 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:28.999 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:28.999 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:28.999 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:28.999 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:28.999 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.999 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:28.999 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:32:28.999 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:28.999 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:28.999 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.999 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:28.999 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:28.999 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:28.999 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.999 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:28.999 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.999 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:28.999 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:29.000 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.000 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:29.000 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:29.000 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:29.000 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:29.000 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:29.000 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:29.000 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:29.000 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:29.000 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:29.000 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.000 07:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.935 [2024-11-18 07:17:50.809608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.935 [2024-11-18 07:17:50.809664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175c230 with addr=10.0.0.2, port=8010 00:32:29.935 [2024-11-18 07:17:50.809694] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:29.935 [2024-11-18 07:17:50.809710] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:29.935 [2024-11-18 07:17:50.809724] 
bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:30.875 [2024-11-18 07:17:51.811978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.875 [2024-11-18 07:17:51.812028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1743a90 with addr=10.0.0.2, port=8010 00:32:30.875 [2024-11-18 07:17:51.812053] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:30.875 [2024-11-18 07:17:51.812068] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:30.875 [2024-11-18 07:17:51.812080] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:32.257 [2024-11-18 07:17:52.814214] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:32.257 request: 00:32:32.257 { 00:32:32.257 "name": "nvme_second", 00:32:32.257 "trtype": "tcp", 00:32:32.257 "traddr": "10.0.0.2", 00:32:32.257 "adrfam": "ipv4", 00:32:32.257 "trsvcid": "8010", 00:32:32.257 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:32.257 "wait_for_attach": false, 00:32:32.257 "attach_timeout_ms": 3000, 00:32:32.257 "method": "bdev_nvme_start_discovery", 00:32:32.257 "req_id": 1 00:32:32.257 } 00:32:32.257 Got JSON-RPC error response 00:32:32.257 response: 00:32:32.257 { 00:32:32.257 "code": -110, 00:32:32.257 "message": "Connection timed out" 00:32:32.257 } 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 366366 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:32.257 rmmod nvme_tcp 00:32:32.257 rmmod nvme_fabrics 00:32:32.257 rmmod nvme_keyring 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 366260 ']' 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 366260 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 366260 ']' 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 366260 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 366260 00:32:32.257 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:32.258 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:32.258 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 366260' 00:32:32.258 killing process with pid 366260 00:32:32.258 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 366260 00:32:32.258 07:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 366260 00:32:32.258 07:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:32.258 07:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:32.258 07:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:32.258 07:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:32.258 07:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:32:32.258 07:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:32.258 07:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:32:32.258 07:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:32.258 07:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:32.258 07:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.258 07:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:32.258 07:17:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:34.793 00:32:34.793 real 0m13.280s 00:32:34.793 user 0m19.011s 00:32:34.793 sys 0m2.915s 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.793 ************************************ 00:32:34.793 END TEST nvmf_host_discovery 00:32:34.793 ************************************ 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.793 ************************************ 00:32:34.793 START TEST nvmf_host_multipath_status 00:32:34.793 ************************************ 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:34.793 * Looking for test storage... 00:32:34.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
scripts/common.sh@345 -- # : 1 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:34.793 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:34.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.793 --rc genhtml_branch_coverage=1 00:32:34.793 --rc genhtml_function_coverage=1 00:32:34.793 --rc genhtml_legend=1 00:32:34.793 --rc geninfo_all_blocks=1 00:32:34.793 --rc geninfo_unexecuted_blocks=1 00:32:34.793 00:32:34.793 ' 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:34.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.794 --rc genhtml_branch_coverage=1 00:32:34.794 --rc genhtml_function_coverage=1 00:32:34.794 --rc genhtml_legend=1 00:32:34.794 --rc geninfo_all_blocks=1 00:32:34.794 --rc geninfo_unexecuted_blocks=1 00:32:34.794 00:32:34.794 ' 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:34.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.794 --rc genhtml_branch_coverage=1 00:32:34.794 --rc genhtml_function_coverage=1 00:32:34.794 --rc genhtml_legend=1 00:32:34.794 --rc geninfo_all_blocks=1 00:32:34.794 --rc geninfo_unexecuted_blocks=1 00:32:34.794 00:32:34.794 ' 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:34.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.794 --rc genhtml_branch_coverage=1 00:32:34.794 --rc genhtml_function_coverage=1 00:32:34.794 --rc genhtml_legend=1 00:32:34.794 --rc 
geninfo_all_blocks=1 00:32:34.794 --rc geninfo_unexecuted_blocks=1 00:32:34.794 00:32:34.794 ' 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:32:34.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:34.794 07:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:36.700 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:36.700 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:36.700 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:36.700 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:36.700 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:36.700 07:17:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:36.700 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:36.700 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:36.700 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:36.700 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:36.700 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:36.700 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:36.700 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:36.700 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:36.700 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:32:36.700 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:36.700 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:36.700 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:36.700 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:36.700 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:36.700 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:36.700 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:36.700 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:36.700 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:36.701 
07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:36.701 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:36.701 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:36.701 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:36.701 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:36.701 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:36.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:36.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:32:36.963 00:32:36.963 --- 10.0.0.2 ping statistics --- 00:32:36.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.963 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:36.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:36.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:32:36.963 00:32:36.963 --- 10.0.0.1 ping statistics --- 00:32:36.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.963 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=369437 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 369437 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 369437 ']' 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:36.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:36.963 07:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:36.963 [2024-11-18 07:17:57.783030] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:32:36.963 [2024-11-18 07:17:57.783111] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:36.963 [2024-11-18 07:17:57.855900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:36.963 [2024-11-18 07:17:57.903113] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:36.963 [2024-11-18 07:17:57.903178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:36.963 [2024-11-18 07:17:57.903191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:36.963 [2024-11-18 07:17:57.903202] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:36.963 [2024-11-18 07:17:57.903212] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
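With the target application up inside the cvl_0_0_ns_spdk namespace, the multipath setup that follows in the trace reduces to a handful of rpc.py calls. A minimal sketch using only the commands, sizes and addresses that appear in this run (Malloc0, nqn.2016-06.io.spdk:cnode1 and the 10.0.0.2:4420/4421 listeners are this test's values, not requirements):

  # target side: TCP transport, a 64 MB malloc bdev with 512-byte blocks, and a subsystem with two listeners
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # initiator side (bdevperf on /var/tmp/bdevperf.sock): one Nvme0 controller with a path to each listener
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

The remainder of the trace then flips ANA states with nvmf_subsystem_listener_set_ana_state and inspects the per-path result via bdev_nvme_get_io_paths piped through jq.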
00:32:36.963 [2024-11-18 07:17:57.904628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.963 [2024-11-18 07:17:57.904635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:37.222 07:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:37.222 07:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:37.222 07:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:37.222 07:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:37.222 07:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:37.222 07:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:37.222 07:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=369437 00:32:37.222 07:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:37.480 [2024-11-18 07:17:58.324570] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:37.480 07:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:37.738 Malloc0 00:32:37.738 07:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:37.996 07:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:38.254 07:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:38.512 [2024-11-18 07:17:59.448964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:38.512 07:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:38.772 [2024-11-18 07:17:59.729726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:39.031 07:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=369720 00:32:39.031 07:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:39.031 07:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:39.031 07:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 369720 
/var/tmp/bdevperf.sock 00:32:39.031 07:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 369720 ']' 00:32:39.031 07:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:39.031 07:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:39.031 07:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:39.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:39.031 07:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:39.031 07:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:39.289 07:18:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:39.289 07:18:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:39.289 07:18:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:39.547 07:18:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:39.806 Nvme0n1 00:32:39.806 07:18:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:40.374 Nvme0n1 00:32:40.374 07:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:40.374 07:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:42.912 07:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:42.912 07:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:42.912 07:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:42.912 07:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:43.847 07:18:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:43.847 07:18:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:43.847 07:18:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:43.847 07:18:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:44.416 07:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:44.416 07:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:44.416 07:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.416 07:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:44.674 07:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:44.674 07:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:44.674 07:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.674 07:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:44.933 07:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:44.933 07:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:44.933 07:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.933 07:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:45.191 07:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:45.191 07:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:45.191 07:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:45.191 07:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:45.450 07:18:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:45.450 07:18:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:45.450 07:18:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:45.450 07:18:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:45.708 07:18:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:45.708 07:18:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:45.708 07:18:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:45.968 07:18:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:46.226 07:18:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:32:47.165 07:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:32:47.165 07:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:47.165 07:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:47.165 07:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:47.424 07:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:47.424 07:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:47.424 07:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:47.424 07:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:47.992 07:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:47.992 07:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:47.992 07:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:47.992 07:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:48.251 07:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:48.251 07:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:48.251 07:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:48.251 07:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:48.508 07:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:48.508 07:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:48.508 07:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:48.508 07:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:48.767 07:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:48.767 07:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:48.767 07:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:48.767 07:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:49.026 07:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:49.026 07:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:32:49.026 07:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:49.284 07:18:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:49.544 07:18:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:32:50.479 07:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:32:50.479 07:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:50.479 07:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:50.479 07:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:50.738 07:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:50.738 07:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:50.738 07:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:50.738 07:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:50.997 07:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:50.997 07:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:50.997 07:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:50.997 07:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:51.256 07:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:51.256 07:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:51.256 07:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.256 07:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:51.825 07:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:51.825 07:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:51.825 07:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.825 07:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:51.825 07:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:51.825 07:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:51.825 07:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.825 07:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:52.084 07:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:52.084 07:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:32:52.084 07:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:32:52.654 07:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:52.654 07:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:32:54.030 07:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:32:54.030 07:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:54.030 07:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.030 07:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:54.030 07:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:54.030 07:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:54.030 07:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.030 07:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:54.288 07:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:54.288 07:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:54.288 07:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.288 07:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:54.546 07:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:54.546 07:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:54.546 07:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.546 07:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:54.805 07:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:54.805 07:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:54.805 07:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:32:54.805 07:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:55.063 07:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:55.063 07:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:55.063 07:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.063 07:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:55.322 07:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:55.322 07:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:32:55.322 07:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:55.889 07:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:55.889 07:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:32:57.263 07:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:32:57.263 07:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:57.263 07:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.263 07:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:57.263 07:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:57.263 07:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:57.263 07:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.263 07:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:57.522 07:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:57.522 07:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:57.522 07:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.522 07:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:57.780 07:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.780 07:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:57.780 07:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.780 07:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:58.039 07:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.039 07:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:58.039 07:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.039 07:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:58.297 07:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:58.297 07:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:58.297 07:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.298 07:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:58.555 07:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:58.555 07:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:32:58.555 07:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:58.813 07:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:59.072 07:18:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:00.445 07:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:00.445 07:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:00.445 07:18:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.445 07:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:00.445 07:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:00.445 07:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:00.445 07:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.445 07:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:00.703 07:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.703 07:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:00.703 07:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.703 07:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:00.961 07:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.961 07:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:00.961 07:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.961 07:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:01.219 07:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.219 07:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:01.219 07:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.219 07:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:01.478 07:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:01.478 07:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:01.478 07:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.478 
07:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:01.736 07:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.736 07:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:01.994 07:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:01.994 07:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:02.560 07:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:02.560 07:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:03.935 07:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:03.935 07:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:03.935 07:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.935 07:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:03.935 07:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.935 07:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:03.935 07:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.935 07:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:04.193 07:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.193 07:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:04.193 07:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.193 07:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:04.452 07:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.452 07:18:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:04.452 07:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.452 07:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:04.710 07:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.710 07:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:04.710 07:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.710 07:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:04.968 07:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.968 07:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:04.969 07:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.969 07:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:05.227 07:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.227 07:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:05.227 07:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:05.485 07:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:06.051 07:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:06.986 07:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:06.986 07:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:06.986 07:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.986 07:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:07.244 07:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:07.244 07:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:07.244 07:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.244 07:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:07.502 07:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.502 07:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:07.502 07:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.502 07:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:07.761 07:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.761 07:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:07.761 07:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.761 07:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:08.020 07:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.020 07:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:08.020 07:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.020 07:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:08.278 07:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.278 07:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:08.278 07:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.278 07:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:08.535 07:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.535 07:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:08.536 
07:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:08.793 07:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:09.051 07:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:10.425 07:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:10.425 07:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:10.425 07:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.425 07:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:10.425 07:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.425 07:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:10.425 07:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.425 07:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:10.683 07:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.683 07:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:10.683 07:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.683 07:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:10.941 07:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.941 07:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:10.941 07:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.941 07:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:11.200 07:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.200 07:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:11.200 07:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.200 07:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:11.458 07:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.458 07:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:11.458 07:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.458 07:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:11.716 07:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.716 07:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:11.716 07:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:12.284 07:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:12.542 07:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:13.479 07:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:13.479 07:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:13.479 07:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.479 07:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:13.738 07:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.738 07:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:13.738 07:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.738 07:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:13.996 07:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:33:13.996 07:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:13.996 07:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.996 07:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:14.255 07:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.255 07:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:14.255 07:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.255 07:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:14.513 07:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.513 07:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:14.513 07:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.513 07:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:14.772 07:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.772 07:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:14.772 07:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.772 07:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:15.030 07:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:15.030 07:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 369720 00:33:15.030 07:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 369720 ']' 00:33:15.030 07:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 369720 00:33:15.030 07:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:15.030 07:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:15.030 07:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 369720 00:33:15.030 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # 
process_name=reactor_2 00:33:15.030 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:33:15.030 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 369720' 00:33:15.030 killing process with pid 369720 00:33:15.030 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 369720 00:33:15.030 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 369720 00:33:15.292 { 00:33:15.292 "results": [ 00:33:15.292 { 00:33:15.292 "job": "Nvme0n1", 00:33:15.292 "core_mask": "0x4", 00:33:15.292 "workload": "verify", 00:33:15.292 "status": "terminated", 00:33:15.292 "verify_range": { 00:33:15.292 "start": 0, 00:33:15.292 "length": 16384 00:33:15.292 }, 00:33:15.292 "queue_depth": 128, 00:33:15.292 "io_size": 4096, 00:33:15.292 "runtime": 34.564878, 00:33:15.292 "iops": 8047.938141138528, 00:33:15.292 "mibps": 31.437258363822377, 00:33:15.292 "io_failed": 0, 00:33:15.292 "io_timeout": 0, 00:33:15.292 "avg_latency_us": 15878.96099901581, 00:33:15.292 "min_latency_us": 155.4962962962963, 00:33:15.292 "max_latency_us": 4026531.84 00:33:15.292 } 00:33:15.292 ], 00:33:15.292 "core_count": 1 00:33:15.292 } 00:33:15.292 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 369720 00:33:15.292 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:15.292 [2024-11-18 07:17:59.797590] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:33:15.292 [2024-11-18 07:17:59.797679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid369720 ] 00:33:15.292 [2024-11-18 07:17:59.865214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.292 [2024-11-18 07:17:59.910953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:15.292 Running I/O for 90 seconds... 
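The check_status/port_status loop that dominates this section boils down to one RPC plus a jq filter per attribute. As a compact restatement, the sketch below flips one listener's ANA state and then reads the current/connected/accessible flags the way the test does; it assumes the same subsystem (nqn.2016-06.io.spdk:cnode1), listeners (10.0.0.2:4420/4421) and bdevperf RPC socket as the trace, and probe_path is an illustrative helper name, not part of multipath_status.sh.

#!/usr/bin/env bash
# Sketch of the ANA flip + path-status probe repeated throughout the trace (assumed names/paths).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NQN=nqn.2016-06.io.spdk:cnode1

# Target side: demote the 4420 listener (target RPC socket is the default /var/tmp/spdk.sock).
"$SPDK"/scripts/rpc.py nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n non_optimized

# Initiator side: probe_path <trsvcid> <field> reads one flag from bdevperf's view of its I/O paths.
probe_path() {
    "$SPDK"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2"
}

sleep 1                      # give the ANA change time to propagate, as the test does
probe_path 4420 current      # typically "false" once 4421 is the optimized path
probe_path 4421 current      # typically "true"
probe_path 4420 accessible   # a non_optimized path remains accessible: "true"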
00:33:15.292 8432.00 IOPS, 32.94 MiB/s [2024-11-18T06:18:36.270Z] 8574.50 IOPS, 33.49 MiB/s [2024-11-18T06:18:36.270Z] 8601.00 IOPS, 33.60 MiB/s [2024-11-18T06:18:36.270Z] 8609.25 IOPS, 33.63 MiB/s [2024-11-18T06:18:36.270Z] 8592.00 IOPS, 33.56 MiB/s [2024-11-18T06:18:36.270Z] 8570.83 IOPS, 33.48 MiB/s [2024-11-18T06:18:36.270Z] 8556.86 IOPS, 33.43 MiB/s [2024-11-18T06:18:36.270Z] 8563.38 IOPS, 33.45 MiB/s [2024-11-18T06:18:36.270Z] 8554.44 IOPS, 33.42 MiB/s [2024-11-18T06:18:36.270Z] 8545.80 IOPS, 33.38 MiB/s [2024-11-18T06:18:36.270Z] 8542.45 IOPS, 33.37 MiB/s [2024-11-18T06:18:36.270Z] 8550.67 IOPS, 33.40 MiB/s [2024-11-18T06:18:36.270Z] 8556.00 IOPS, 33.42 MiB/s [2024-11-18T06:18:36.270Z] 8552.14 IOPS, 33.41 MiB/s [2024-11-18T06:18:36.270Z] 8546.73 IOPS, 33.39 MiB/s [2024-11-18T06:18:36.270Z] [2024-11-18 07:18:16.564299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:119344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.292 [2024-11-18 07:18:16.564377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.564453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:119424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.292 [2024-11-18 07:18:16.564475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.564522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:119432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.292 [2024-11-18 07:18:16.564541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.564580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:119440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.292 [2024-11-18 07:18:16.564598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.564620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.292 [2024-11-18 07:18:16.564636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.564675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:119456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.292 [2024-11-18 07:18:16.564691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.564713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:119464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.292 [2024-11-18 07:18:16.564728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.564750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:119472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.292 [2024-11-18 07:18:16.564766] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.564803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:119480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.292 [2024-11-18 07:18:16.564820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.564855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:119488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.292 [2024-11-18 07:18:16.564872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.564894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:119496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.292 [2024-11-18 07:18:16.564910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.564932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:119504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.292 [2024-11-18 07:18:16.564948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.564970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:119512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.292 [2024-11-18 07:18:16.564986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.565007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:119520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.292 [2024-11-18 07:18:16.565024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.565046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:119528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.292 [2024-11-18 07:18:16.565062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.565099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:119536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.292 [2024-11-18 07:18:16.565115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.566673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:119544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.292 [2024-11-18 07:18:16.566700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.566731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:119552 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:33:15.292 [2024-11-18 07:18:16.566749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.566774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:119560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.292 [2024-11-18 07:18:16.566791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.566817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:119568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.292 [2024-11-18 07:18:16.566833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.566859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.292 [2024-11-18 07:18:16.566875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.566900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.292 [2024-11-18 07:18:16.566922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.566948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:119368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.292 [2024-11-18 07:18:16.566980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.567005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:119376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.292 [2024-11-18 07:18:16.567021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.567046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:119384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.292 [2024-11-18 07:18:16.567062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:15.292 [2024-11-18 07:18:16.567085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:119392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.292 [2024-11-18 07:18:16.567101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.567125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.293 [2024-11-18 07:18:16.567141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.567165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:30 nsid:1 lba:119408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.293 [2024-11-18 07:18:16.567181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.567205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:119576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.567221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.567245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:119584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.567261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.567285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:119592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.567300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.567325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:119600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.567340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.567365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:119608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.567380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.567404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.567424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.567449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:119624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.567466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.567517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:119632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.567535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.567560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:119640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.567576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.567602] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.567619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.567643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:119656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.567659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.567685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:119664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.567701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.567726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:119672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.567742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.567767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:119680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.567783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.567823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:119688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.567839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.567863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:119696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.567879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.567903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:119704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.567920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.567944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.567959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.567989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:119720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.568005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.568029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:119728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.568045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.568069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:119736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.568085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.568109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:119744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.568124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.568149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:119752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.568165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.568189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:119760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.568204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.568228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:119768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.568244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.568268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:119776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.568284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.568308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:119784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.568323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.568347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:119792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.568363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.568387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:119800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.568403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.568427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:119808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.568444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.568618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:119416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.293 [2024-11-18 07:18:16.568641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.568672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:119816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.568690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.568718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:119824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.568735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.568763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:119832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.568780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.568808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:119840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.568824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.568852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:119848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.293 [2024-11-18 07:18:16.568868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:15.293 [2024-11-18 07:18:16.568896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:119856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.568927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.568955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:119864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.568971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.568999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:119872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.569014] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.569042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:119880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.569057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.569085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:119888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.569101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.569128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.569143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.569175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:119904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.569192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.569219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:119912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.569235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.569262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:119920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.569278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.569305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:119928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.569321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.569348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:119936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.569363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.569390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:119944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.569406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.569433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:119952 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.569449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.569500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:119960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.569519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.569547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:119968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.569564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.569592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:119976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.569609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.569636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:119984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.569652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.569680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:119992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.569697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.569725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.569747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.569776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.569809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.569837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.569854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.569881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.569897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.569925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:117 nsid:1 lba:120032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.569941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.569969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:120040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.569985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:16.570012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:16.570029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:15.294 8100.38 IOPS, 31.64 MiB/s [2024-11-18T06:18:36.272Z] 7623.88 IOPS, 29.78 MiB/s [2024-11-18T06:18:36.272Z] 7200.33 IOPS, 28.13 MiB/s [2024-11-18T06:18:36.272Z] 6821.37 IOPS, 26.65 MiB/s [2024-11-18T06:18:36.272Z] 6837.85 IOPS, 26.71 MiB/s [2024-11-18T06:18:36.272Z] 6916.76 IOPS, 27.02 MiB/s [2024-11-18T06:18:36.272Z] 7008.41 IOPS, 27.38 MiB/s [2024-11-18T06:18:36.272Z] 7201.22 IOPS, 28.13 MiB/s [2024-11-18T06:18:36.272Z] 7366.33 IOPS, 28.77 MiB/s [2024-11-18T06:18:36.272Z] 7534.96 IOPS, 29.43 MiB/s [2024-11-18T06:18:36.272Z] 7574.19 IOPS, 29.59 MiB/s [2024-11-18T06:18:36.272Z] 7603.70 IOPS, 29.70 MiB/s [2024-11-18T06:18:36.272Z] 7629.54 IOPS, 29.80 MiB/s [2024-11-18T06:18:36.272Z] 7699.79 IOPS, 30.08 MiB/s [2024-11-18T06:18:36.272Z] 7815.33 IOPS, 30.53 MiB/s [2024-11-18T06:18:36.272Z] 7918.23 IOPS, 30.93 MiB/s [2024-11-18T06:18:36.272Z] [2024-11-18 07:18:33.274616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:33.274674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:33.274732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:33.274753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:33.274802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:33.274819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:33.274842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:33.274858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:33.274896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:33.274913] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:33.274935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:33.274951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:33.274972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:33.275003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:33.275025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.294 [2024-11-18 07:18:33.275041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:33.275079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.294 [2024-11-18 07:18:33.275096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:33.275118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.294 [2024-11-18 07:18:33.275134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:33.275156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.294 [2024-11-18 07:18:33.275173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:15.294 [2024-11-18 07:18:33.275194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.275211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.275233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.275249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.275272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.275288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.275310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 
07:18:33.275327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.275349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.275380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.275404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:71488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.275424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.275446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.275462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.275518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.275537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.276467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.276511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.276540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.276558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.276582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.276598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.276619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:71672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.276636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.276658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.276674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.276696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71736 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.276712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.276734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.276749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.276796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.295 [2024-11-18 07:18:33.276812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.276833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.276863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.276884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.276904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.276926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.276941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.276962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.276977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.276997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:71568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.277012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.277032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.277047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.277067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.277082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.277102] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:71664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.277117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.277137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.277152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.277172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.277187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.277208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.277222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.277242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.277257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.277277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.295 [2024-11-18 07:18:33.277292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.277312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.277331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.277352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.277383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.277405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.277420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.277458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.277474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
00:33:15.295 [2024-11-18 07:18:33.277505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.295 [2024-11-18 07:18:33.277523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.277545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.295 [2024-11-18 07:18:33.277562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.277583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.295 [2024-11-18 07:18:33.277600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.277621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.295 [2024-11-18 07:18:33.277637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.277659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.295 [2024-11-18 07:18:33.277675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.277696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.295 [2024-11-18 07:18:33.277712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:15.295 [2024-11-18 07:18:33.277733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.296 [2024-11-18 07:18:33.277749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:15.296 [2024-11-18 07:18:33.277771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.296 [2024-11-18 07:18:33.277787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:15.296 [2024-11-18 07:18:33.277809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.296 [2024-11-18 07:18:33.277840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:15.296 [2024-11-18 07:18:33.277868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.296 [2024-11-18 07:18:33.277884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:15.296 [2024-11-18 07:18:33.278393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.296 [2024-11-18 07:18:33.278416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:15.296 [2024-11-18 07:18:33.278443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.296 [2024-11-18 07:18:33.278461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:15.296 [2024-11-18 07:18:33.278484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.296 [2024-11-18 07:18:33.278510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:15.296 [2024-11-18 07:18:33.278535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.296 [2024-11-18 07:18:33.278552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:15.296 8009.06 IOPS, 31.29 MiB/s [2024-11-18T06:18:36.274Z] 8028.79 IOPS, 31.36 MiB/s [2024-11-18T06:18:36.274Z] 8043.71 IOPS, 31.42 MiB/s [2024-11-18T06:18:36.274Z] Received shutdown signal, test time was about 34.565651 seconds 00:33:15.296 00:33:15.296 Latency(us) 00:33:15.296 [2024-11-18T06:18:36.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:15.296 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:15.296 Verification LBA range: start 0x0 length 0x4000 00:33:15.296 Nvme0n1 : 34.56 8047.94 31.44 0.00 0.00 15878.96 155.50 4026531.84 00:33:15.296 [2024-11-18T06:18:36.274Z] =================================================================================================================== 00:33:15.296 [2024-11-18T06:18:36.274Z] Total : 8047.94 31.44 0.00 0.00 15878.96 155.50 4026531.84 00:33:15.296 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:15.555 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:15.555 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:15.555 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:33:15.555 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:15.555 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:33:15.555 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:15.555 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:33:15.555 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in 
{1..20} 00:33:15.555 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:15.555 rmmod nvme_tcp 00:33:15.555 rmmod nvme_fabrics 00:33:15.555 rmmod nvme_keyring 00:33:15.815 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:15.815 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:33:15.815 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:33:15.815 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 369437 ']' 00:33:15.815 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 369437 00:33:15.815 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 369437 ']' 00:33:15.815 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 369437 00:33:15.815 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:15.815 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:15.815 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 369437 00:33:15.815 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:15.815 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:15.815 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 369437' 00:33:15.815 killing process with pid 369437 00:33:15.815 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 369437 00:33:15.815 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 369437 00:33:16.073 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:16.073 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:16.073 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:16.073 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:33:16.073 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:33:16.073 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:16.073 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:33:16.073 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:16.073 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:16.073 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:16.073 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:16.073 07:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.977 07:18:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:17.977 00:33:17.977 real 0m43.572s 00:33:17.977 user 2m12.701s 00:33:17.977 sys 0m11.017s 00:33:17.977 07:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:17.977 07:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:17.977 ************************************ 00:33:17.977 END TEST nvmf_host_multipath_status 00:33:17.977 ************************************ 00:33:17.977 07:18:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:17.977 07:18:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:17.977 07:18:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:17.977 07:18:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.977 ************************************ 00:33:17.977 START TEST nvmf_discovery_remove_ifc 00:33:17.977 ************************************ 00:33:17.977 07:18:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:18.237 * Looking for test storage... 00:33:18.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:18.237 07:18:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:18.237 07:18:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:33:18.237 07:18:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:18.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.237 --rc genhtml_branch_coverage=1 00:33:18.237 --rc genhtml_function_coverage=1 00:33:18.237 --rc genhtml_legend=1 00:33:18.237 --rc geninfo_all_blocks=1 00:33:18.237 --rc geninfo_unexecuted_blocks=1 00:33:18.237 00:33:18.237 ' 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:18.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.237 --rc genhtml_branch_coverage=1 00:33:18.237 --rc genhtml_function_coverage=1 00:33:18.237 --rc genhtml_legend=1 00:33:18.237 --rc geninfo_all_blocks=1 00:33:18.237 --rc geninfo_unexecuted_blocks=1 00:33:18.237 00:33:18.237 ' 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:18.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.237 --rc genhtml_branch_coverage=1 00:33:18.237 --rc genhtml_function_coverage=1 00:33:18.237 --rc genhtml_legend=1 00:33:18.237 --rc geninfo_all_blocks=1 00:33:18.237 --rc geninfo_unexecuted_blocks=1 00:33:18.237 00:33:18.237 ' 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:18.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.237 --rc genhtml_branch_coverage=1 00:33:18.237 --rc genhtml_function_coverage=1 00:33:18.237 --rc genhtml_legend=1 00:33:18.237 --rc geninfo_all_blocks=1 00:33:18.237 --rc geninfo_unexecuted_blocks=1 00:33:18.237 00:33:18.237 ' 00:33:18.237 07:18:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:18.237 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:33:18.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:18.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:20.771 07:18:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:20.771 Found 
0000:0a:00.0 (0x8086 - 0x159b) 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.771 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:20.772 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:20.772 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
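Note: the device scan traced above boils down to a sysfs lookup that maps each whitelisted NIC PCI function to its kernel net device. A minimal sketch of that step, using the two E810 (0x159b) functions found in this run:

  for pci in 0000:0a:00.0 0000:0a:00.1; do
      # nvmf/common.sh globs /sys/bus/pci/devices/$pci/net/* and keeps the basename
      for net in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$net" ] && echo "Found net devices under $pci: ${net##*/}"
      done
  done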
00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:20.772 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:20.772 07:18:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:20.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:20.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:33:20.772 00:33:20.772 --- 10.0.0.2 ping statistics --- 00:33:20.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.772 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:20.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:20.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:33:20.772 00:33:20.772 --- 10.0.0.1 ping statistics --- 00:33:20.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.772 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=376746 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 376746 
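Note: condensed from the trace above, the network fixture for this test is built entirely with iproute2 plus a single iptables rule (interface names and addresses are the ones used in this run):

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns
  modprobe nvme-tcp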
00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 376746 ']' 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:20.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:20.772 [2024-11-18 07:18:41.409613] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:33:20.772 [2024-11-18 07:18:41.409701] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:20.772 [2024-11-18 07:18:41.482255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.772 [2024-11-18 07:18:41.529642] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:20.772 [2024-11-18 07:18:41.529705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:20.772 [2024-11-18 07:18:41.529720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:20.772 [2024-11-18 07:18:41.529731] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:20.772 [2024-11-18 07:18:41.529740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
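Note: nvmfappstart, as traced above, launches the NVMe-oF target inside the target namespace and then blocks in waitforlisten until the app answers on its RPC socket. A rough stand-in for that wait is sketched below; the real helper in autotest_common.sh is assumed to poll RPCs with a retry limit, which is not visible in this log:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # crude approximation: wait for the default RPC unix socket to appear
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done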
00:33:20.772 [2024-11-18 07:18:41.530366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:20.772 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:20.773 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:20.773 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:20.773 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.773 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:20.773 [2024-11-18 07:18:41.683440] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:20.773 [2024-11-18 07:18:41.691682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:20.773 null0 00:33:20.773 [2024-11-18 07:18:41.723597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:20.773 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.773 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=376823 00:33:20.773 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:20.773 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 376823 /tmp/host.sock 00:33:20.773 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 376823 ']' 00:33:20.773 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:20.773 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:20.773 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:20.773 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:20.773 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:20.773 07:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:21.031 [2024-11-18 07:18:41.790961] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
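Note: from this point the test drives two SPDK applications. The target (nvmfpid 376746) is configured over /var/tmp/spdk.sock with a TCP transport and listeners on 10.0.0.2:8009 (discovery) and 10.0.0.2:4420, as the NOTICE lines show. A second app plays the host/initiator role and is controlled over its own socket, launched as traced above:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  hostpid=$!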
00:33:21.031 [2024-11-18 07:18:41.791052] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid376823 ] 00:33:21.031 [2024-11-18 07:18:41.857441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.031 [2024-11-18 07:18:41.905739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:21.289 07:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:21.289 07:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:21.289 07:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:21.289 07:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:21.289 07:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.289 07:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:21.289 07:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.289 07:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:21.289 07:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.289 07:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:21.289 07:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.289 07:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:21.289 07:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.289 07:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:22.227 [2024-11-18 07:18:43.182592] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:22.227 [2024-11-18 07:18:43.182618] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:22.227 [2024-11-18 07:18:43.182645] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:22.486 [2024-11-18 07:18:43.268951] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:22.486 [2024-11-18 07:18:43.450111] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:22.486 [2024-11-18 07:18:43.451064] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xb1fc00:1 started. 
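Note: the host-side setup above is three RPCs against /tmp/host.sock, issued through rpc_cmd (the autotest helper that forwards to scripts/rpc.py). Flags exactly as traced in this run:

  rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
  rpc_cmd -s /tmp/host.sock framework_start_init
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
      --wait-for-attach
  # --wait-for-attach returns only after the discovered subsystem
  # (nqn.2016-06.io.spdk:cnode0) is attached, per the "attach nvme0 done" line above.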
00:33:22.486 [2024-11-18 07:18:43.452721] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:22.486 [2024-11-18 07:18:43.452790] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:22.486 [2024-11-18 07:18:43.452821] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:22.486 [2024-11-18 07:18:43.452841] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:22.486 [2024-11-18 07:18:43.452869] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:22.486 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.486 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:22.486 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:22.486 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:22.486 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:22.486 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.486 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:22.486 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:22.486 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:22.486 [2024-11-18 07:18:43.459807] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb1fc00 was disconnected and freed. delete nvme_qpair. 
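Note: the wait_for_bdev / get_bdev_list calls that repeat below reduce to polling the host app's bdev list once a second. A sketch of their shape, matching the traces (the real definitions live in host/discovery_remove_ifc.sh and may differ in detail):

  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      # loop until the bdev list equals the expected value ("" means no bdevs left)
      while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
  }
  wait_for_bdev nvme0n1    # here: wait for the namespace bdev created by the attach above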
00:33:22.746 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.746 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:22.746 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:22.746 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:22.746 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:22.746 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:22.746 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:22.746 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.746 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:22.746 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:22.746 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:22.746 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:22.746 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.746 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:22.746 07:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:23.687 07:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:23.687 07:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:23.687 07:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:23.687 07:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.687 07:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:23.687 07:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:23.687 07:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:23.687 07:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.687 07:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:23.687 07:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:25.069 07:18:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:25.069 07:18:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:25.069 07:18:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:25.069 07:18:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.069 07:18:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:25.069 07:18:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:25.069 07:18:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:25.069 07:18:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.069 07:18:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:25.070 07:18:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:26.005 07:18:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:26.005 07:18:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:26.005 07:18:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.005 07:18:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:26.005 07:18:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:26.005 07:18:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:26.005 07:18:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:26.005 07:18:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.005 07:18:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:26.005 07:18:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:26.964 07:18:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:26.964 07:18:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:26.964 07:18:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:26.964 07:18:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.964 07:18:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:26.964 07:18:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:26.964 07:18:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:26.964 07:18:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.964 07:18:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:26.964 07:18:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:27.902 07:18:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:27.902 07:18:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:27.902 07:18:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:27.902 07:18:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.902 07:18:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:27.902 07:18:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:27.902 07:18:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:27.902 07:18:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.902 07:18:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:27.902 07:18:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:28.163 [2024-11-18 07:18:48.894152] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:28.163 [2024-11-18 07:18:48.894224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.163 [2024-11-18 07:18:48.894245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.163 [2024-11-18 07:18:48.894263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.163 [2024-11-18 07:18:48.894279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.163 [2024-11-18 07:18:48.894293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.163 [2024-11-18 07:18:48.894306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.163 [2024-11-18 07:18:48.894319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.163 [2024-11-18 07:18:48.894331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.163 [2024-11-18 07:18:48.894344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.163 [2024-11-18 07:18:48.894358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.163 [2024-11-18 07:18:48.894371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafc400 is same with the state(6) to be set 00:33:28.163 [2024-11-18 07:18:48.904168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xafc400 (9): Bad file descriptor 00:33:28.163 [2024-11-18 07:18:48.914212] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:28.163 [2024-11-18 07:18:48.914233] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
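Note: the failure being exercised here was injected a few seconds earlier in the trace, simply by removing the target's address and downing its port inside the namespace:

  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  # With --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2 (set at discovery start),
  # bdev_nvme times out the socket (errno 110), deletes the qpairs and starts reconnect
  # attempts, which is what the ERROR/INFO lines above and below report.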
00:33:28.163 [2024-11-18 07:18:48.914242] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:28.163 [2024-11-18 07:18:48.914251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:28.163 [2024-11-18 07:18:48.914291] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:29.108 07:18:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:29.108 07:18:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:29.108 07:18:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:29.108 07:18:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.108 07:18:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:29.108 07:18:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:29.108 07:18:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:29.108 [2024-11-18 07:18:49.968532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:29.108 [2024-11-18 07:18:49.968615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xafc400 with addr=10.0.0.2, port=4420 00:33:29.108 [2024-11-18 07:18:49.968649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafc400 is same with the state(6) to be set 00:33:29.108 [2024-11-18 07:18:49.968697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xafc400 (9): Bad file descriptor 00:33:29.108 [2024-11-18 07:18:49.969156] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:33:29.108 [2024-11-18 07:18:49.969196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:29.108 [2024-11-18 07:18:49.969212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:29.108 [2024-11-18 07:18:49.969226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:29.108 [2024-11-18 07:18:49.969239] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:29.108 [2024-11-18 07:18:49.969248] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:29.108 [2024-11-18 07:18:49.969256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:29.108 [2024-11-18 07:18:49.969268] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:33:29.108 [2024-11-18 07:18:49.969277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:29.108 07:18:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.108 07:18:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:29.108 07:18:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:30.044 [2024-11-18 07:18:50.971786] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:30.044 [2024-11-18 07:18:50.971824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:30.044 [2024-11-18 07:18:50.971865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:30.044 [2024-11-18 07:18:50.971879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:30.044 [2024-11-18 07:18:50.971893] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:33:30.044 [2024-11-18 07:18:50.971906] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:30.044 [2024-11-18 07:18:50.971916] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:30.044 [2024-11-18 07:18:50.971924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:30.044 [2024-11-18 07:18:50.971962] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:30.044 [2024-11-18 07:18:50.972004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.044 [2024-11-18 07:18:50.972025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.044 [2024-11-18 07:18:50.972043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.044 [2024-11-18 07:18:50.972057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.044 [2024-11-18 07:18:50.972071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.044 [2024-11-18 07:18:50.972093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.044 [2024-11-18 07:18:50.972107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.044 [2024-11-18 07:18:50.972120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.044 [2024-11-18 07:18:50.972135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.044 [2024-11-18 07:18:50.972164] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.044 [2024-11-18 07:18:50.972178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:33:30.044 [2024-11-18 07:18:50.972353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaebb40 (9): Bad file descriptor 00:33:30.044 [2024-11-18 07:18:50.973368] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:30.044 [2024-11-18 07:18:50.973390] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:33:30.044 07:18:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:30.044 07:18:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:30.044 07:18:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:30.044 07:18:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.044 07:18:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:30.044 07:18:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:30.044 07:18:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:30.044 07:18:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.303 07:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:30.303 07:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:30.303 07:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:30.303 07:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:30.303 07:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:30.303 07:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:30.303 07:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:30.303 07:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.303 07:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:30.303 07:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:30.303 07:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:30.303 07:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.303 07:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:30.303 07:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:31.241 07:18:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:31.241 07:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:31.241 07:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:31.241 07:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:31.241 07:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.241 07:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:31.242 07:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:31.242 07:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.242 07:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:31.242 07:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:32.177 [2024-11-18 07:18:53.027624] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:32.177 [2024-11-18 07:18:53.027667] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:32.177 [2024-11-18 07:18:53.027691] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:32.177 [2024-11-18 07:18:53.114971] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:32.177 07:18:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:32.177 07:18:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:32.177 07:18:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:32.177 07:18:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.177 07:18:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.177 07:18:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:32.177 07:18:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:32.435 07:18:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.435 [2024-11-18 07:18:53.175732] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:33:32.435 [2024-11-18 07:18:53.176516] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xafe720:1 started. 
00:33:32.435 [2024-11-18 07:18:53.177875] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:32.435 [2024-11-18 07:18:53.177917] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:32.435 [2024-11-18 07:18:53.177948] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:32.435 [2024-11-18 07:18:53.177969] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:32.435 [2024-11-18 07:18:53.177982] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:32.435 [2024-11-18 07:18:53.185657] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xafe720 was disconnected and freed. delete nvme_qpair. 00:33:32.435 07:18:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:32.435 07:18:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:33.374 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:33.374 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:33.374 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:33.374 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.374 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.374 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:33.374 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:33.374 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.374 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:33.374 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:33.374 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 376823 00:33:33.374 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 376823 ']' 00:33:33.374 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 376823 00:33:33.374 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:33.374 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:33.374 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 376823 00:33:33.374 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:33.374 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:33.374 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 376823' 00:33:33.374 killing process with pid 376823 00:33:33.374 
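The wait_for_bdev loop above keeps re-running get_bdev_list until nvme1n1 reappears after the interface is re-plumbed. A minimal standalone sketch of that poll, assuming the SPDK tree's scripts/rpc.py and the /tmp/host.sock RPC socket used in this run (the function name and the 30-second cap below are made up; the 1-second cadence mirrors the sleep in the trace):

# sketch only: poll bdev_get_bdevs over JSON-RPC until the wanted bdev shows up
wait_for_bdev_sketch() {
    local want=$1 timeout=${2:-30} names i
    for ((i = 0; i < timeout; i++)); do
        names=$(./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
        [[ " $names " == *" $want "* ]] && return 0   # e.g. nvme1n1 is back
        sleep 1
    done
    return 1                                          # gave up waiting
}
# usage: wait_for_bdev_sketch nvme1n1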
07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 376823 00:33:33.374 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 376823 00:33:33.633 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:33.633 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:33.633 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:33.633 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:33.633 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:33.633 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:33.633 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:33.633 rmmod nvme_tcp 00:33:33.633 rmmod nvme_fabrics 00:33:33.633 rmmod nvme_keyring 00:33:33.633 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:33.633 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:33.633 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:33.633 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 376746 ']' 00:33:33.633 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 376746 00:33:33.633 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 376746 ']' 00:33:33.633 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 376746 00:33:33.633 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:33.633 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:33.634 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 376746 00:33:33.634 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:33.634 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:33.634 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 376746' 00:33:33.634 killing process with pid 376746 00:33:33.634 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 376746 00:33:33.634 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 376746 00:33:33.892 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:33.892 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:33.892 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:33.892 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:33.892 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:33:33.892 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:33.892 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:33:33.892 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:33.892 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:33.892 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:33.892 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:33.892 07:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:36.427 07:18:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:36.427 00:33:36.427 real 0m17.904s 00:33:36.427 user 0m25.899s 00:33:36.427 sys 0m3.084s 00:33:36.427 07:18:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:36.427 07:18:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:36.427 ************************************ 00:33:36.427 END TEST nvmf_discovery_remove_ifc 00:33:36.427 ************************************ 00:33:36.427 07:18:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:36.427 07:18:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.428 ************************************ 00:33:36.428 START TEST nvmf_identify_kernel_target 00:33:36.428 ************************************ 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:36.428 * Looking for test storage... 
00:33:36.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:36.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.428 --rc genhtml_branch_coverage=1 00:33:36.428 --rc genhtml_function_coverage=1 00:33:36.428 --rc genhtml_legend=1 00:33:36.428 --rc geninfo_all_blocks=1 00:33:36.428 --rc geninfo_unexecuted_blocks=1 00:33:36.428 00:33:36.428 ' 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:36.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.428 --rc genhtml_branch_coverage=1 00:33:36.428 --rc genhtml_function_coverage=1 00:33:36.428 --rc genhtml_legend=1 00:33:36.428 --rc geninfo_all_blocks=1 00:33:36.428 --rc geninfo_unexecuted_blocks=1 00:33:36.428 00:33:36.428 ' 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:36.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.428 --rc genhtml_branch_coverage=1 00:33:36.428 --rc genhtml_function_coverage=1 00:33:36.428 --rc genhtml_legend=1 00:33:36.428 --rc geninfo_all_blocks=1 00:33:36.428 --rc geninfo_unexecuted_blocks=1 00:33:36.428 00:33:36.428 ' 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:36.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.428 --rc genhtml_branch_coverage=1 00:33:36.428 --rc genhtml_function_coverage=1 00:33:36.428 --rc genhtml_legend=1 00:33:36.428 --rc geninfo_all_blocks=1 00:33:36.428 --rc geninfo_unexecuted_blocks=1 00:33:36.428 00:33:36.428 ' 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:36.428 07:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:36.428 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:36.428 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:36.428 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:36.428 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:36.428 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:36.428 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:36.428 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:36.428 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:36.428 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:36.428 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:36.428 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:36.428 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.428 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.428 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.428 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:36.428 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.429 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:36.429 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:36.429 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:36.429 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:36.429 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:36.429 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:36.429 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:33:36.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:36.429 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:36.429 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:36.429 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:36.429 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:36.429 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:36.429 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:36.429 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:36.429 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:36.429 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:36.429 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.429 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:36.429 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:36.429 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:36.429 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:36.429 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:36.429 07:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:38.335 07:18:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:38.335 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:38.335 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:38.335 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:38.335 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:38.335 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:38.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:38.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:33:38.336 00:33:38.336 --- 10.0.0.2 ping statistics --- 00:33:38.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.336 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:38.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:38.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:33:38.336 00:33:38.336 --- 10.0.0.1 ping statistics --- 00:33:38.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.336 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:38.336 07:18:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:38.336 07:18:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:39.735 Waiting for block devices as requested 00:33:39.735 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:39.735 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:39.735 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:40.014 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:40.014 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:40.014 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:40.286 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:40.286 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:40.286 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:40.286 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:40.562 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:40.562 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:40.562 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:40.562 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:40.843 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:40.843 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:40.843 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:40.843 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:40.843 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:40.843 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:40.843 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:40.843 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:40.843 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
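Below, once a free non-zoned NVMe namespace is selected (the queue/zoned check above plus the spdk-gpt.py/blkid partition-table probe that follows), the kernel nvmet target is assembled through configfs. Because xtrace does not print redirections, the trace only shows bare mkdir/echo/ln commands; the following is a hedged reconstruction of where each write most likely lands, using the standard kernel nvmet configfs attribute names and the values from this run (the echo-to-file mapping is inferred, not visible in the log):

sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$sub" "$sub/namespaces/1" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"    # reported back as Model Number
echo 1 > "$sub/attr_allow_any_host"                          # accept any host NQN
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"          # back the namespace with the picked device
echo 1 > "$sub/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"                          # TCP listener: address, transport, port, family
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                             # expose the subsystem on the port

The identify dumps further down are consistent with this wiring: the Model Number string, the 10.0.0.1:4420 tcp/ipv4 discovery entries and the nqn.2016-06.io.spdk:testnqn subsystem all trace back to these writes.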
00:33:40.843 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:40.843 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:40.843 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:41.153 No valid GPT data, bailing 00:33:41.153 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:41.153 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:33:41.153 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:33:41.153 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:41.153 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:41.153 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:41.153 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:41.153 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:41.153 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:41.153 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:33:41.153 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:41.153 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:33:41.153 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:41.153 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:33:41.153 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:33:41.153 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:33:41.153 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:41.153 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:41.153 00:33:41.153 Discovery Log Number of Records 2, Generation counter 2 00:33:41.153 =====Discovery Log Entry 0====== 00:33:41.153 trtype: tcp 00:33:41.153 adrfam: ipv4 00:33:41.153 subtype: current discovery subsystem 00:33:41.153 treq: not specified, sq flow control disable supported 00:33:41.153 portid: 1 00:33:41.153 trsvcid: 4420 00:33:41.153 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:41.153 traddr: 10.0.0.1 00:33:41.153 eflags: none 00:33:41.153 sectype: none 00:33:41.153 =====Discovery Log Entry 1====== 00:33:41.153 trtype: tcp 00:33:41.153 adrfam: ipv4 00:33:41.153 subtype: nvme subsystem 00:33:41.153 treq: not specified, sq flow control disable 
supported 00:33:41.153 portid: 1 00:33:41.153 trsvcid: 4420 00:33:41.153 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:41.153 traddr: 10.0.0.1 00:33:41.153 eflags: none 00:33:41.153 sectype: none 00:33:41.153 07:19:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:41.153 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:41.153 ===================================================== 00:33:41.153 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:41.153 ===================================================== 00:33:41.153 Controller Capabilities/Features 00:33:41.153 ================================ 00:33:41.153 Vendor ID: 0000 00:33:41.153 Subsystem Vendor ID: 0000 00:33:41.153 Serial Number: 013ebe4123949d66e695 00:33:41.153 Model Number: Linux 00:33:41.153 Firmware Version: 6.8.9-20 00:33:41.153 Recommended Arb Burst: 0 00:33:41.153 IEEE OUI Identifier: 00 00 00 00:33:41.153 Multi-path I/O 00:33:41.153 May have multiple subsystem ports: No 00:33:41.153 May have multiple controllers: No 00:33:41.153 Associated with SR-IOV VF: No 00:33:41.153 Max Data Transfer Size: Unlimited 00:33:41.153 Max Number of Namespaces: 0 00:33:41.153 Max Number of I/O Queues: 1024 00:33:41.153 NVMe Specification Version (VS): 1.3 00:33:41.153 NVMe Specification Version (Identify): 1.3 00:33:41.153 Maximum Queue Entries: 1024 00:33:41.153 Contiguous Queues Required: No 00:33:41.153 Arbitration Mechanisms Supported 00:33:41.153 Weighted Round Robin: Not Supported 00:33:41.153 Vendor Specific: Not Supported 00:33:41.153 Reset Timeout: 7500 ms 00:33:41.153 Doorbell Stride: 4 bytes 00:33:41.153 NVM Subsystem Reset: Not Supported 00:33:41.153 Command Sets Supported 00:33:41.153 NVM Command Set: Supported 00:33:41.153 Boot Partition: Not Supported 00:33:41.153 Memory Page Size Minimum: 4096 bytes 00:33:41.153 Memory Page Size Maximum: 4096 bytes 00:33:41.154 Persistent Memory Region: Not Supported 00:33:41.154 Optional Asynchronous Events Supported 00:33:41.154 Namespace Attribute Notices: Not Supported 00:33:41.154 Firmware Activation Notices: Not Supported 00:33:41.154 ANA Change Notices: Not Supported 00:33:41.154 PLE Aggregate Log Change Notices: Not Supported 00:33:41.154 LBA Status Info Alert Notices: Not Supported 00:33:41.154 EGE Aggregate Log Change Notices: Not Supported 00:33:41.154 Normal NVM Subsystem Shutdown event: Not Supported 00:33:41.154 Zone Descriptor Change Notices: Not Supported 00:33:41.154 Discovery Log Change Notices: Supported 00:33:41.154 Controller Attributes 00:33:41.154 128-bit Host Identifier: Not Supported 00:33:41.154 Non-Operational Permissive Mode: Not Supported 00:33:41.154 NVM Sets: Not Supported 00:33:41.154 Read Recovery Levels: Not Supported 00:33:41.154 Endurance Groups: Not Supported 00:33:41.154 Predictable Latency Mode: Not Supported 00:33:41.154 Traffic Based Keep ALive: Not Supported 00:33:41.154 Namespace Granularity: Not Supported 00:33:41.154 SQ Associations: Not Supported 00:33:41.154 UUID List: Not Supported 00:33:41.154 Multi-Domain Subsystem: Not Supported 00:33:41.154 Fixed Capacity Management: Not Supported 00:33:41.154 Variable Capacity Management: Not Supported 00:33:41.154 Delete Endurance Group: Not Supported 00:33:41.154 Delete NVM Set: Not Supported 00:33:41.154 Extended LBA Formats Supported: Not Supported 00:33:41.154 Flexible Data Placement 
Supported: Not Supported 00:33:41.154 00:33:41.154 Controller Memory Buffer Support 00:33:41.154 ================================ 00:33:41.154 Supported: No 00:33:41.154 00:33:41.154 Persistent Memory Region Support 00:33:41.154 ================================ 00:33:41.154 Supported: No 00:33:41.154 00:33:41.154 Admin Command Set Attributes 00:33:41.154 ============================ 00:33:41.154 Security Send/Receive: Not Supported 00:33:41.154 Format NVM: Not Supported 00:33:41.154 Firmware Activate/Download: Not Supported 00:33:41.154 Namespace Management: Not Supported 00:33:41.154 Device Self-Test: Not Supported 00:33:41.154 Directives: Not Supported 00:33:41.154 NVMe-MI: Not Supported 00:33:41.154 Virtualization Management: Not Supported 00:33:41.154 Doorbell Buffer Config: Not Supported 00:33:41.154 Get LBA Status Capability: Not Supported 00:33:41.154 Command & Feature Lockdown Capability: Not Supported 00:33:41.154 Abort Command Limit: 1 00:33:41.154 Async Event Request Limit: 1 00:33:41.154 Number of Firmware Slots: N/A 00:33:41.154 Firmware Slot 1 Read-Only: N/A 00:33:41.154 Firmware Activation Without Reset: N/A 00:33:41.154 Multiple Update Detection Support: N/A 00:33:41.154 Firmware Update Granularity: No Information Provided 00:33:41.154 Per-Namespace SMART Log: No 00:33:41.154 Asymmetric Namespace Access Log Page: Not Supported 00:33:41.154 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:41.154 Command Effects Log Page: Not Supported 00:33:41.154 Get Log Page Extended Data: Supported 00:33:41.154 Telemetry Log Pages: Not Supported 00:33:41.154 Persistent Event Log Pages: Not Supported 00:33:41.154 Supported Log Pages Log Page: May Support 00:33:41.154 Commands Supported & Effects Log Page: Not Supported 00:33:41.154 Feature Identifiers & Effects Log Page:May Support 00:33:41.154 NVMe-MI Commands & Effects Log Page: May Support 00:33:41.154 Data Area 4 for Telemetry Log: Not Supported 00:33:41.154 Error Log Page Entries Supported: 1 00:33:41.154 Keep Alive: Not Supported 00:33:41.154 00:33:41.154 NVM Command Set Attributes 00:33:41.154 ========================== 00:33:41.154 Submission Queue Entry Size 00:33:41.154 Max: 1 00:33:41.154 Min: 1 00:33:41.154 Completion Queue Entry Size 00:33:41.154 Max: 1 00:33:41.154 Min: 1 00:33:41.154 Number of Namespaces: 0 00:33:41.154 Compare Command: Not Supported 00:33:41.154 Write Uncorrectable Command: Not Supported 00:33:41.154 Dataset Management Command: Not Supported 00:33:41.154 Write Zeroes Command: Not Supported 00:33:41.154 Set Features Save Field: Not Supported 00:33:41.154 Reservations: Not Supported 00:33:41.154 Timestamp: Not Supported 00:33:41.154 Copy: Not Supported 00:33:41.154 Volatile Write Cache: Not Present 00:33:41.154 Atomic Write Unit (Normal): 1 00:33:41.154 Atomic Write Unit (PFail): 1 00:33:41.154 Atomic Compare & Write Unit: 1 00:33:41.154 Fused Compare & Write: Not Supported 00:33:41.154 Scatter-Gather List 00:33:41.154 SGL Command Set: Supported 00:33:41.154 SGL Keyed: Not Supported 00:33:41.154 SGL Bit Bucket Descriptor: Not Supported 00:33:41.154 SGL Metadata Pointer: Not Supported 00:33:41.154 Oversized SGL: Not Supported 00:33:41.154 SGL Metadata Address: Not Supported 00:33:41.154 SGL Offset: Supported 00:33:41.154 Transport SGL Data Block: Not Supported 00:33:41.154 Replay Protected Memory Block: Not Supported 00:33:41.154 00:33:41.154 Firmware Slot Information 00:33:41.154 ========================= 00:33:41.154 Active slot: 0 00:33:41.154 00:33:41.154 00:33:41.154 Error Log 00:33:41.154 
========= 00:33:41.154 00:33:41.154 Active Namespaces 00:33:41.154 ================= 00:33:41.154 Discovery Log Page 00:33:41.154 ================== 00:33:41.154 Generation Counter: 2 00:33:41.154 Number of Records: 2 00:33:41.154 Record Format: 0 00:33:41.154 00:33:41.154 Discovery Log Entry 0 00:33:41.154 ---------------------- 00:33:41.154 Transport Type: 3 (TCP) 00:33:41.154 Address Family: 1 (IPv4) 00:33:41.154 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:41.154 Entry Flags: 00:33:41.154 Duplicate Returned Information: 0 00:33:41.154 Explicit Persistent Connection Support for Discovery: 0 00:33:41.154 Transport Requirements: 00:33:41.154 Secure Channel: Not Specified 00:33:41.154 Port ID: 1 (0x0001) 00:33:41.154 Controller ID: 65535 (0xffff) 00:33:41.154 Admin Max SQ Size: 32 00:33:41.154 Transport Service Identifier: 4420 00:33:41.154 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:41.154 Transport Address: 10.0.0.1 00:33:41.154 Discovery Log Entry 1 00:33:41.154 ---------------------- 00:33:41.154 Transport Type: 3 (TCP) 00:33:41.154 Address Family: 1 (IPv4) 00:33:41.154 Subsystem Type: 2 (NVM Subsystem) 00:33:41.154 Entry Flags: 00:33:41.154 Duplicate Returned Information: 0 00:33:41.154 Explicit Persistent Connection Support for Discovery: 0 00:33:41.154 Transport Requirements: 00:33:41.154 Secure Channel: Not Specified 00:33:41.154 Port ID: 1 (0x0001) 00:33:41.154 Controller ID: 65535 (0xffff) 00:33:41.154 Admin Max SQ Size: 32 00:33:41.154 Transport Service Identifier: 4420 00:33:41.154 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:41.154 Transport Address: 10.0.0.1 00:33:41.155 07:19:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:41.459 get_feature(0x01) failed 00:33:41.459 get_feature(0x02) failed 00:33:41.459 get_feature(0x04) failed 00:33:41.459 ===================================================== 00:33:41.459 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:41.459 ===================================================== 00:33:41.459 Controller Capabilities/Features 00:33:41.459 ================================ 00:33:41.459 Vendor ID: 0000 00:33:41.459 Subsystem Vendor ID: 0000 00:33:41.459 Serial Number: 3abe21323d2e7acc079d 00:33:41.459 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:41.459 Firmware Version: 6.8.9-20 00:33:41.459 Recommended Arb Burst: 6 00:33:41.459 IEEE OUI Identifier: 00 00 00 00:33:41.459 Multi-path I/O 00:33:41.459 May have multiple subsystem ports: Yes 00:33:41.459 May have multiple controllers: Yes 00:33:41.459 Associated with SR-IOV VF: No 00:33:41.459 Max Data Transfer Size: Unlimited 00:33:41.459 Max Number of Namespaces: 1024 00:33:41.459 Max Number of I/O Queues: 128 00:33:41.459 NVMe Specification Version (VS): 1.3 00:33:41.459 NVMe Specification Version (Identify): 1.3 00:33:41.459 Maximum Queue Entries: 1024 00:33:41.459 Contiguous Queues Required: No 00:33:41.459 Arbitration Mechanisms Supported 00:33:41.459 Weighted Round Robin: Not Supported 00:33:41.459 Vendor Specific: Not Supported 00:33:41.459 Reset Timeout: 7500 ms 00:33:41.459 Doorbell Stride: 4 bytes 00:33:41.459 NVM Subsystem Reset: Not Supported 00:33:41.459 Command Sets Supported 00:33:41.459 NVM Command Set: Supported 00:33:41.459 Boot Partition: Not Supported 00:33:41.459 
Memory Page Size Minimum: 4096 bytes 00:33:41.459 Memory Page Size Maximum: 4096 bytes 00:33:41.459 Persistent Memory Region: Not Supported 00:33:41.459 Optional Asynchronous Events Supported 00:33:41.459 Namespace Attribute Notices: Supported 00:33:41.459 Firmware Activation Notices: Not Supported 00:33:41.459 ANA Change Notices: Supported 00:33:41.459 PLE Aggregate Log Change Notices: Not Supported 00:33:41.459 LBA Status Info Alert Notices: Not Supported 00:33:41.459 EGE Aggregate Log Change Notices: Not Supported 00:33:41.459 Normal NVM Subsystem Shutdown event: Not Supported 00:33:41.459 Zone Descriptor Change Notices: Not Supported 00:33:41.459 Discovery Log Change Notices: Not Supported 00:33:41.459 Controller Attributes 00:33:41.460 128-bit Host Identifier: Supported 00:33:41.460 Non-Operational Permissive Mode: Not Supported 00:33:41.460 NVM Sets: Not Supported 00:33:41.460 Read Recovery Levels: Not Supported 00:33:41.460 Endurance Groups: Not Supported 00:33:41.460 Predictable Latency Mode: Not Supported 00:33:41.460 Traffic Based Keep ALive: Supported 00:33:41.460 Namespace Granularity: Not Supported 00:33:41.460 SQ Associations: Not Supported 00:33:41.460 UUID List: Not Supported 00:33:41.460 Multi-Domain Subsystem: Not Supported 00:33:41.460 Fixed Capacity Management: Not Supported 00:33:41.460 Variable Capacity Management: Not Supported 00:33:41.460 Delete Endurance Group: Not Supported 00:33:41.460 Delete NVM Set: Not Supported 00:33:41.460 Extended LBA Formats Supported: Not Supported 00:33:41.460 Flexible Data Placement Supported: Not Supported 00:33:41.460 00:33:41.460 Controller Memory Buffer Support 00:33:41.460 ================================ 00:33:41.460 Supported: No 00:33:41.460 00:33:41.460 Persistent Memory Region Support 00:33:41.460 ================================ 00:33:41.460 Supported: No 00:33:41.460 00:33:41.460 Admin Command Set Attributes 00:33:41.460 ============================ 00:33:41.460 Security Send/Receive: Not Supported 00:33:41.460 Format NVM: Not Supported 00:33:41.460 Firmware Activate/Download: Not Supported 00:33:41.460 Namespace Management: Not Supported 00:33:41.460 Device Self-Test: Not Supported 00:33:41.460 Directives: Not Supported 00:33:41.460 NVMe-MI: Not Supported 00:33:41.460 Virtualization Management: Not Supported 00:33:41.460 Doorbell Buffer Config: Not Supported 00:33:41.460 Get LBA Status Capability: Not Supported 00:33:41.460 Command & Feature Lockdown Capability: Not Supported 00:33:41.460 Abort Command Limit: 4 00:33:41.460 Async Event Request Limit: 4 00:33:41.460 Number of Firmware Slots: N/A 00:33:41.460 Firmware Slot 1 Read-Only: N/A 00:33:41.460 Firmware Activation Without Reset: N/A 00:33:41.460 Multiple Update Detection Support: N/A 00:33:41.460 Firmware Update Granularity: No Information Provided 00:33:41.460 Per-Namespace SMART Log: Yes 00:33:41.460 Asymmetric Namespace Access Log Page: Supported 00:33:41.460 ANA Transition Time : 10 sec 00:33:41.460 00:33:41.460 Asymmetric Namespace Access Capabilities 00:33:41.460 ANA Optimized State : Supported 00:33:41.460 ANA Non-Optimized State : Supported 00:33:41.460 ANA Inaccessible State : Supported 00:33:41.460 ANA Persistent Loss State : Supported 00:33:41.460 ANA Change State : Supported 00:33:41.460 ANAGRPID is not changed : No 00:33:41.460 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:41.460 00:33:41.460 ANA Group Identifier Maximum : 128 00:33:41.460 Number of ANA Group Identifiers : 128 00:33:41.460 Max Number of Allowed Namespaces : 1024 00:33:41.460 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:41.460 Command Effects Log Page: Supported 00:33:41.460 Get Log Page Extended Data: Supported 00:33:41.460 Telemetry Log Pages: Not Supported 00:33:41.460 Persistent Event Log Pages: Not Supported 00:33:41.460 Supported Log Pages Log Page: May Support 00:33:41.460 Commands Supported & Effects Log Page: Not Supported 00:33:41.460 Feature Identifiers & Effects Log Page:May Support 00:33:41.460 NVMe-MI Commands & Effects Log Page: May Support 00:33:41.460 Data Area 4 for Telemetry Log: Not Supported 00:33:41.460 Error Log Page Entries Supported: 128 00:33:41.460 Keep Alive: Supported 00:33:41.460 Keep Alive Granularity: 1000 ms 00:33:41.460 00:33:41.460 NVM Command Set Attributes 00:33:41.460 ========================== 00:33:41.460 Submission Queue Entry Size 00:33:41.460 Max: 64 00:33:41.460 Min: 64 00:33:41.460 Completion Queue Entry Size 00:33:41.460 Max: 16 00:33:41.460 Min: 16 00:33:41.460 Number of Namespaces: 1024 00:33:41.460 Compare Command: Not Supported 00:33:41.460 Write Uncorrectable Command: Not Supported 00:33:41.460 Dataset Management Command: Supported 00:33:41.460 Write Zeroes Command: Supported 00:33:41.460 Set Features Save Field: Not Supported 00:33:41.460 Reservations: Not Supported 00:33:41.460 Timestamp: Not Supported 00:33:41.460 Copy: Not Supported 00:33:41.460 Volatile Write Cache: Present 00:33:41.460 Atomic Write Unit (Normal): 1 00:33:41.460 Atomic Write Unit (PFail): 1 00:33:41.460 Atomic Compare & Write Unit: 1 00:33:41.460 Fused Compare & Write: Not Supported 00:33:41.460 Scatter-Gather List 00:33:41.460 SGL Command Set: Supported 00:33:41.460 SGL Keyed: Not Supported 00:33:41.460 SGL Bit Bucket Descriptor: Not Supported 00:33:41.460 SGL Metadata Pointer: Not Supported 00:33:41.460 Oversized SGL: Not Supported 00:33:41.460 SGL Metadata Address: Not Supported 00:33:41.460 SGL Offset: Supported 00:33:41.460 Transport SGL Data Block: Not Supported 00:33:41.460 Replay Protected Memory Block: Not Supported 00:33:41.460 00:33:41.460 Firmware Slot Information 00:33:41.460 ========================= 00:33:41.460 Active slot: 0 00:33:41.460 00:33:41.460 Asymmetric Namespace Access 00:33:41.460 =========================== 00:33:41.460 Change Count : 0 00:33:41.460 Number of ANA Group Descriptors : 1 00:33:41.460 ANA Group Descriptor : 0 00:33:41.460 ANA Group ID : 1 00:33:41.460 Number of NSID Values : 1 00:33:41.460 Change Count : 0 00:33:41.460 ANA State : 1 00:33:41.460 Namespace Identifier : 1 00:33:41.460 00:33:41.460 Commands Supported and Effects 00:33:41.460 ============================== 00:33:41.460 Admin Commands 00:33:41.460 -------------- 00:33:41.460 Get Log Page (02h): Supported 00:33:41.460 Identify (06h): Supported 00:33:41.460 Abort (08h): Supported 00:33:41.460 Set Features (09h): Supported 00:33:41.460 Get Features (0Ah): Supported 00:33:41.460 Asynchronous Event Request (0Ch): Supported 00:33:41.460 Keep Alive (18h): Supported 00:33:41.460 I/O Commands 00:33:41.460 ------------ 00:33:41.460 Flush (00h): Supported 00:33:41.460 Write (01h): Supported LBA-Change 00:33:41.460 Read (02h): Supported 00:33:41.460 Write Zeroes (08h): Supported LBA-Change 00:33:41.460 Dataset Management (09h): Supported 00:33:41.460 00:33:41.460 Error Log 00:33:41.460 ========= 00:33:41.460 Entry: 0 00:33:41.460 Error Count: 0x3 00:33:41.460 Submission Queue Id: 0x0 00:33:41.460 Command Id: 0x5 00:33:41.460 Phase Bit: 0 00:33:41.460 Status Code: 0x2 00:33:41.460 Status Code Type: 0x0 00:33:41.460 Do Not Retry: 1 00:33:41.460 
Error Location: 0x28 00:33:41.460 LBA: 0x0 00:33:41.460 Namespace: 0x0 00:33:41.460 Vendor Log Page: 0x0 00:33:41.460 ----------- 00:33:41.460 Entry: 1 00:33:41.461 Error Count: 0x2 00:33:41.461 Submission Queue Id: 0x0 00:33:41.461 Command Id: 0x5 00:33:41.461 Phase Bit: 0 00:33:41.461 Status Code: 0x2 00:33:41.461 Status Code Type: 0x0 00:33:41.461 Do Not Retry: 1 00:33:41.461 Error Location: 0x28 00:33:41.461 LBA: 0x0 00:33:41.461 Namespace: 0x0 00:33:41.461 Vendor Log Page: 0x0 00:33:41.461 ----------- 00:33:41.461 Entry: 2 00:33:41.461 Error Count: 0x1 00:33:41.461 Submission Queue Id: 0x0 00:33:41.461 Command Id: 0x4 00:33:41.461 Phase Bit: 0 00:33:41.461 Status Code: 0x2 00:33:41.461 Status Code Type: 0x0 00:33:41.461 Do Not Retry: 1 00:33:41.461 Error Location: 0x28 00:33:41.461 LBA: 0x0 00:33:41.461 Namespace: 0x0 00:33:41.461 Vendor Log Page: 0x0 00:33:41.461 00:33:41.461 Number of Queues 00:33:41.461 ================ 00:33:41.461 Number of I/O Submission Queues: 128 00:33:41.461 Number of I/O Completion Queues: 128 00:33:41.461 00:33:41.461 ZNS Specific Controller Data 00:33:41.461 ============================ 00:33:41.461 Zone Append Size Limit: 0 00:33:41.461 00:33:41.461 00:33:41.461 Active Namespaces 00:33:41.461 ================= 00:33:41.461 get_feature(0x05) failed 00:33:41.461 Namespace ID:1 00:33:41.461 Command Set Identifier: NVM (00h) 00:33:41.461 Deallocate: Supported 00:33:41.461 Deallocated/Unwritten Error: Not Supported 00:33:41.461 Deallocated Read Value: Unknown 00:33:41.461 Deallocate in Write Zeroes: Not Supported 00:33:41.461 Deallocated Guard Field: 0xFFFF 00:33:41.461 Flush: Supported 00:33:41.461 Reservation: Not Supported 00:33:41.461 Namespace Sharing Capabilities: Multiple Controllers 00:33:41.461 Size (in LBAs): 1953525168 (931GiB) 00:33:41.461 Capacity (in LBAs): 1953525168 (931GiB) 00:33:41.461 Utilization (in LBAs): 1953525168 (931GiB) 00:33:41.461 UUID: c66cfca2-d20e-43f8-b28e-eb7e134d56ee 00:33:41.461 Thin Provisioning: Not Supported 00:33:41.461 Per-NS Atomic Units: Yes 00:33:41.461 Atomic Boundary Size (Normal): 0 00:33:41.461 Atomic Boundary Size (PFail): 0 00:33:41.461 Atomic Boundary Offset: 0 00:33:41.461 NGUID/EUI64 Never Reused: No 00:33:41.461 ANA group ID: 1 00:33:41.461 Namespace Write Protected: No 00:33:41.461 Number of LBA Formats: 1 00:33:41.461 Current LBA Format: LBA Format #00 00:33:41.461 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:41.461 00:33:41.461 07:19:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:41.461 07:19:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:41.461 07:19:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:33:41.461 07:19:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:41.461 07:19:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:33:41.461 07:19:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:41.461 07:19:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:41.461 rmmod nvme_tcp 00:33:41.461 rmmod nvme_fabrics 00:33:41.461 07:19:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:41.461 07:19:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:33:41.461 07:19:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:33:41.461 07:19:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:41.461 07:19:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:41.461 07:19:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:41.461 07:19:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:41.461 07:19:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:33:41.461 07:19:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:33:41.461 07:19:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:41.461 07:19:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:41.461 07:19:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:41.461 07:19:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:41.461 07:19:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.461 07:19:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:41.461 07:19:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:43.368 07:19:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:43.368 07:19:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:43.368 07:19:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:43.368 07:19:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:33:43.368 07:19:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:43.368 07:19:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:43.368 07:19:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:43.368 07:19:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:43.368 07:19:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:33:43.368 07:19:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:33:43.368 07:19:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:44.747 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:44.747 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:44.747 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:44.747 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:44.747 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:44.747 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:33:44.747 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:44.747 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:44.747 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:44.747 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:44.747 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:44.747 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:44.747 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:44.747 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:44.747 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:45.007 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:45.944 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:45.944 00:33:45.944 real 0m9.914s 00:33:45.944 user 0m2.195s 00:33:45.944 sys 0m3.688s 00:33:45.944 07:19:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:45.944 07:19:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:45.944 ************************************ 00:33:45.944 END TEST nvmf_identify_kernel_target 00:33:45.944 ************************************ 00:33:45.944 07:19:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:45.944 07:19:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:45.945 07:19:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:45.945 07:19:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.945 ************************************ 00:33:45.945 START TEST nvmf_auth_host 00:33:45.945 ************************************ 00:33:45.945 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:45.945 * Looking for test storage... 
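Before moving into the auth test output, the nvmf_identify_kernel_target run that just finished boils down to two steps: querying the kernel nvmet target over TCP with spdk_nvme_identify, and tearing the configfs-based target back down. A minimal sketch of those steps, using the build path, NQN, and configfs layout taken from the trace above (the redirect target of the bare "echo 0" step is not captured in the trace, so the namespace-enable node below is an assumption):

  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # build path from the trace
  SUBNQN=nqn.2016-06.io.spdk:testnqn

  # Identify the NVM subsystem exposed by the kernel target over TCP; the -r
  # argument is a transport ID string (trtype/adrfam/traddr/trsvcid/subnqn).
  "$SPDK_ROOT/build/bin/spdk_nvme_identify" \
      -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:$SUBNQN"

  # Teardown in the same order clean_kernel_target uses in the trace above.
  echo 0 > "/sys/kernel/config/nvmet/subsystems/$SUBNQN/namespaces/1/enable"  # assumed target of 'echo 0'
  rm -f "/sys/kernel/config/nvmet/ports/1/subsystems/$SUBNQN"
  rmdir "/sys/kernel/config/nvmet/subsystems/$SUBNQN/namespaces/1"
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "/sys/kernel/config/nvmet/subsystems/$SUBNQN"
  modprobe -r nvmet_tcp nvmet

The get_feature(0x01/0x02/0x04/0x05) failures printed above are expected here: the Linux nvmet target does not implement those optional features, and spdk_nvme_identify reports the failed Get Features calls and carries on.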
00:33:45.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:45.945 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:45.945 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:33:45.945 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:46.203 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:46.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.204 --rc genhtml_branch_coverage=1 00:33:46.204 --rc genhtml_function_coverage=1 00:33:46.204 --rc genhtml_legend=1 00:33:46.204 --rc geninfo_all_blocks=1 00:33:46.204 --rc geninfo_unexecuted_blocks=1 00:33:46.204 00:33:46.204 ' 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:46.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.204 --rc genhtml_branch_coverage=1 00:33:46.204 --rc genhtml_function_coverage=1 00:33:46.204 --rc genhtml_legend=1 00:33:46.204 --rc geninfo_all_blocks=1 00:33:46.204 --rc geninfo_unexecuted_blocks=1 00:33:46.204 00:33:46.204 ' 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:46.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.204 --rc genhtml_branch_coverage=1 00:33:46.204 --rc genhtml_function_coverage=1 00:33:46.204 --rc genhtml_legend=1 00:33:46.204 --rc geninfo_all_blocks=1 00:33:46.204 --rc geninfo_unexecuted_blocks=1 00:33:46.204 00:33:46.204 ' 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:46.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.204 --rc genhtml_branch_coverage=1 00:33:46.204 --rc genhtml_function_coverage=1 00:33:46.204 --rc genhtml_legend=1 00:33:46.204 --rc geninfo_all_blocks=1 00:33:46.204 --rc geninfo_unexecuted_blocks=1 00:33:46.204 00:33:46.204 ' 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:46.204 07:19:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:46.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:46.204 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.205 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:46.205 07:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.205 07:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:46.205 07:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:46.205 07:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:46.205 07:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:48.740 07:19:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:48.740 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:48.740 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:48.740 
07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:48.740 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:48.740 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:48.740 07:19:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:48.740 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:48.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:48.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:33:48.741 00:33:48.741 --- 10.0.0.2 ping statistics --- 00:33:48.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:48.741 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:48.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:48.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:33:48.741 00:33:48.741 --- 10.0.0.1 ping statistics --- 00:33:48.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:48.741 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=384048 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 384048 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 384048 ']' 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
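The trace above is the TCP test-bed bring-up for the auth test: the first e810 port (cvl_0_0) is moved into a private network namespace as the target side, the second port (cvl_0_1) stays in the root namespace as the initiator, 10.0.0.2/10.0.0.1 are assigned, an iptables rule opens port 4420, connectivity is checked with ping in both directions, and nvmf_tgt is started inside the namespace with -L nvme_auth. A condensed sketch of the same bring-up, with the interface names, addresses, and nvmf_tgt invocation taken from the trace (backgrounding with & stands in for the harness's waitforlisten step):

  TARGET_IF=cvl_0_0        # NIC handed to the target namespace (from the trace)
  INITIATOR_IF=cvl_0_1     # NIC left in the root namespace for the initiator
  NS=cvl_0_0_ns_spdk

  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up

  # Allow NVMe/TCP traffic to the listener port and verify reachability
  # in both directions before starting the target application.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

  modprobe nvme-tcp
  ip netns exec "$NS" \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &

With the target listening, the trace that follows derives the DH-HMAC-CHAP secrets (the keys[] and ckeys[] arrays) by reading random bytes from /dev/urandom with xxd and wrapping them into DHHC-1 key files via gen_dhchap_key, one per digest/length combination exercised by auth.sh.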
00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=688af4731913e4daeb552e3229aaa0e7 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.RUN 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 688af4731913e4daeb552e3229aaa0e7 0 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 688af4731913e4daeb552e3229aaa0e7 0 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=688af4731913e4daeb552e3229aaa0e7 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.RUN 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.RUN 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.RUN 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:48.741 07:19:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=830513f70217fab218cf1eb3ad9ff95a20121cb6d00aebac16a2f33682d37bef 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.s8V 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 830513f70217fab218cf1eb3ad9ff95a20121cb6d00aebac16a2f33682d37bef 3 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 830513f70217fab218cf1eb3ad9ff95a20121cb6d00aebac16a2f33682d37bef 3 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=830513f70217fab218cf1eb3ad9ff95a20121cb6d00aebac16a2f33682d37bef 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:33:48.741 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:49.001 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.s8V 00:33:49.001 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.s8V 00:33:49.001 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.s8V 00:33:49.001 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:49.001 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:49.001 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:49.001 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:49.001 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:49.001 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:49.001 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:49.001 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b4857b8e59d13548c16c781a39fb7c98bed14be2d248e0ea 00:33:49.001 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:49.001 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.AhF 00:33:49.001 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b4857b8e59d13548c16c781a39fb7c98bed14be2d248e0ea 0 00:33:49.001 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b4857b8e59d13548c16c781a39fb7c98bed14be2d248e0ea 0 
00:33:49.001 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:49.001 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:49.001 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b4857b8e59d13548c16c781a39fb7c98bed14be2d248e0ea 00:33:49.001 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:49.001 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.AhF 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.AhF 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.AhF 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8dc23045ac3da514b3ba4617ca6ea3c8355b77a0cd7bff88 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.imt 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8dc23045ac3da514b3ba4617ca6ea3c8355b77a0cd7bff88 2 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8dc23045ac3da514b3ba4617ca6ea3c8355b77a0cd7bff88 2 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8dc23045ac3da514b3ba4617ca6ea3c8355b77a0cd7bff88 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.imt 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.imt 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.imt 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:49.002 07:19:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f5e2fa73dd05efc5ecb4a4e6adad974c 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.xHC 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f5e2fa73dd05efc5ecb4a4e6adad974c 1 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f5e2fa73dd05efc5ecb4a4e6adad974c 1 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f5e2fa73dd05efc5ecb4a4e6adad974c 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.xHC 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.xHC 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.xHC 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=46f38f52e6d56deca826df8d5d31de5d 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.baa 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 46f38f52e6d56deca826df8d5d31de5d 1 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 46f38f52e6d56deca826df8d5d31de5d 1 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=46f38f52e6d56deca826df8d5d31de5d 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.baa 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.baa 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.baa 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=74bdb8f65dc54290a644d2606b6e916803759ba7c3e688d5 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.omy 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 74bdb8f65dc54290a644d2606b6e916803759ba7c3e688d5 2 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 74bdb8f65dc54290a644d2606b6e916803759ba7c3e688d5 2 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=74bdb8f65dc54290a644d2606b6e916803759ba7c3e688d5 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.omy 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.omy 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.omy 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:49.002 07:19:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0ae0539aed7eba5cb7800b6c4440ab6d 00:33:49.002 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:49.003 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.XnA 00:33:49.003 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0ae0539aed7eba5cb7800b6c4440ab6d 0 00:33:49.003 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0ae0539aed7eba5cb7800b6c4440ab6d 0 00:33:49.003 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:49.003 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:49.003 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0ae0539aed7eba5cb7800b6c4440ab6d 00:33:49.003 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:49.003 07:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.XnA 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.XnA 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.XnA 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=573bbce49862ccea1a11714552036018beb72cfc86fe9f283e44e2e066e812b6 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.JID 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 573bbce49862ccea1a11714552036018beb72cfc86fe9f283e44e2e066e812b6 3 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 573bbce49862ccea1a11714552036018beb72cfc86fe9f283e44e2e066e812b6 3 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=573bbce49862ccea1a11714552036018beb72cfc86fe9f283e44e2e066e812b6 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.JID 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.JID 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.JID 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 384048 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 384048 ']' 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:49.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:49.261 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RUN 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.s8V ]] 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.s8V 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.AhF 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.imt ]] 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.imt 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.xHC 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.baa ]] 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.baa 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.omy 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.XnA ]] 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.XnA 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.JID 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:49.519 07:19:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:49.519 07:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:50.896 Waiting for block devices as requested 00:33:50.896 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:50.896 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:50.896 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:51.156 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:51.156 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:51.156 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:51.415 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:51.415 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:51.415 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:51.415 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:51.673 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:51.673 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:51.673 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:51.673 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:51.930 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:51.930 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:51.930 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:52.495 No valid GPT data, bailing 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:52.495 07:19:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:52.495 00:33:52.495 Discovery Log Number of Records 2, Generation counter 2 00:33:52.495 =====Discovery Log Entry 0====== 00:33:52.495 trtype: tcp 00:33:52.495 adrfam: ipv4 00:33:52.495 subtype: current discovery subsystem 00:33:52.495 treq: not specified, sq flow control disable supported 00:33:52.495 portid: 1 00:33:52.495 trsvcid: 4420 00:33:52.495 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:52.495 traddr: 10.0.0.1 00:33:52.495 eflags: none 00:33:52.495 sectype: none 00:33:52.495 =====Discovery Log Entry 1====== 00:33:52.495 trtype: tcp 00:33:52.495 adrfam: ipv4 00:33:52.495 subtype: nvme subsystem 00:33:52.495 treq: not specified, sq flow control disable supported 00:33:52.495 portid: 1 00:33:52.495 trsvcid: 4420 00:33:52.495 subnqn: nqn.2024-02.io.spdk:cnode0 00:33:52.495 traddr: 10.0.0.1 00:33:52.495 eflags: none 00:33:52.495 sectype: none 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:52.495 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: ]] 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.754 nvme0n1 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: ]] 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
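Each /tmp/spdk.key-* file generated at the top of this trace holds its secret in the DHHC-1 representation: the hex string read from /dev/urandom is used verbatim as the secret, a 4-byte CRC-32 of it is appended, and the result is base64-encoded between a DHHC-1:<hash-id>: prefix and a trailing colon, where the hash id 00/01/02/03 mirrors the digests map above (null/sha256/sha384/sha512). The snippet below is only a minimal stand-alone sketch of that formatting step as it appears in the trace (the CRC byte order is assumed); gen_key is an illustrative name, not the format_dhchap_key helper from nvmf/common.sh.

# Sketch only: approximates the DHHC-1 formatting observed above.
gen_key() {
    local digest=$1 hexlen=$2 key
    key=$(xxd -p -c0 -l $((hexlen / 2)) /dev/urandom)   # e.g. "gen_key 2 48" reads 24 bytes -> 48 hex chars
    python3 -c '
import sys
from base64 import b64encode
from zlib import crc32
key = sys.argv[1].encode()                        # the hex string itself is the secret
blob = key + crc32(key).to_bytes(4, "little")     # append a 4-byte CRC-32 (byte order assumed)
print(f"DHHC-1:{int(sys.argv[2]):02d}:{b64encode(blob).decode()}:")
' "$key" "$digest"
}

gen_key 2 48 > /tmp/spdk.key-sha384.example && chmod 0600 /tmp/spdk.key-sha384.example

The chmod 0600 mirrors what the trace does for every generated key file before the path is handed to keyring_file_add_key.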
00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.754 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.014 nvme0n1 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:53.014 07:19:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: ]] 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.014 07:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.273 nvme0n1 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: ]] 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.273 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.532 nvme0n1 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: ]] 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.532 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.791 nvme0n1 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.791 nvme0n1 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.791 07:19:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.791 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.049 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:54.049 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:54.049 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.049 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.049 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.049 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:54.049 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:54.049 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:33:54.049 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:54.049 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:54.049 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:54.049 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:54.049 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:33:54.049 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:33:54.049 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:54.049 07:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:54.307 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: ]] 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.308 nvme0n1 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.308 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: ]] 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:54.566 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.567 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.567 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:54.567 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.567 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:54.567 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:54.567 
07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:54.567 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:54.567 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.567 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.567 nvme0n1 00:33:54.567 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.567 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:54.567 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.567 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.567 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:54.567 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.567 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:54.567 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:54.567 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.567 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: ]] 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:54.825 07:19:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.825 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:54.826 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:54.826 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:54.826 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:54.826 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.826 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.826 nvme0n1 00:33:54.826 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.826 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:54.826 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:54.826 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.826 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.826 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.826 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:54.826 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:54.826 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.826 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:54.826 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: ]] 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.084 07:19:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.084 07:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.084 nvme0n1 00:33:55.084 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.085 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.085 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.085 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.085 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.085 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.085 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.085 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.085 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.085 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:55.343 07:19:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.343 nvme0n1 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:55.343 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: ]] 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.278 07:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.536 nvme0n1 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:33:56.536 07:19:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: ]] 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.536 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.795 nvme0n1 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: ]] 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
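The passes above all follow the same host-side pattern; the trace only varies the digest, DH group and key index. As a condensed sketch (assuming rpc_cmd is the usual wrapper around SPDK's scripts/rpc.py used by the test environment, and taking the address, NQNs and flags verbatim from the log), one connect_authenticate iteration boils down to:

# One connect_authenticate pass as exercised in this trace (sha256 digest shown;
# the dhgroup and keyid change per loop iteration).
digest=sha256
dhgroup=ffdhe4096
keyid=2

# Restrict the host to a single digest/dhgroup combination for this pass.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach over TCP with the host secret, plus the controller secret when one is
# defined for this key index (keyid 4 in this trace has no controller key).
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Confirm the controller came up, then detach so the next pass starts clean.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0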
00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.795 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.053 nvme0n1 00:33:57.053 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.053 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.053 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.053 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.053 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.053 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.053 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.053 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.053 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.053 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.053 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.053 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.053 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:33:57.053 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.054 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:57.054 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:33:57.054 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:57.054 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:33:57.054 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:33:57.054 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:57.054 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:57.054 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:33:57.054 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: ]] 00:33:57.054 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:33:57.054 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:33:57.054 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.054 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:57.054 07:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:57.054 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:57.054 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.054 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:57.054 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.054 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.054 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.054 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.054 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:57.054 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:57.054 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:57.054 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.054 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.054 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:57.054 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.054 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:57.054 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:57.054 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:57.054 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:57.054 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.054 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.312 nvme0n1 00:33:57.312 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.312 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.312 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.312 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.312 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.571 07:19:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.571 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.829 nvme0n1 00:33:57.829 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.829 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.829 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.829 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.829 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.829 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.829 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.829 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.829 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.829 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.829 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.829 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:57.829 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.829 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:33:57.829 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.829 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:57.829 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:57.829 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:57.830 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:33:57.830 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:33:57.830 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:57.830 07:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: ]] 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.741 07:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.308 nvme0n1 00:34:00.308 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.308 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.308 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.308 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.308 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.308 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.308 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.308 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.308 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.308 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.308 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.308 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.308 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: ]] 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 
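For orientation, the xtrace markers host/auth.sh@101-@104 show how these passes are driven: an outer loop over DH groups, an inner loop over key indices, a target-side nvmet_auth_set_key call (which, judging by the echoed 'hmac(sha256)', dhgroup name and DHHC-1 secrets, presumably feeds the digest, DH group and keys into the kernel soft target's per-host DH-CHAP settings), then the host-side connect_authenticate. A reconstructed skeleton, with the key arrays and helper bodies as defined earlier in the real script:

# Loop structure behind this section of the trace (host/auth.sh@101-@104).
# Only the groups observed here are listed; any later groups follow the same pattern.
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do                      # key indices 0..4
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"   # target side: set digest, dhgroup, DHHC-1 secrets
        connect_authenticate sha256 "$dhgroup" "$keyid" # host side: attach, verify nvme0, detach
    done
done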
00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.309 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.876 nvme0n1 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.876 07:19:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: ]] 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.876 07:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.443 nvme0n1 00:34:01.443 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.443 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.443 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.443 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.443 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.443 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.443 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.443 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.443 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.443 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.443 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.443 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.443 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:01.443 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.443 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: ]] 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.444 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.011 nvme0n1 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.011 07:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.270 nvme0n1 00:34:02.270 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.270 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.270 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.270 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.270 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: ]] 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.528 07:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:03.462 nvme0n1 00:34:03.462 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.462 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.462 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.462 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.462 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.462 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.462 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.462 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.462 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.462 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.462 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.462 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: ]] 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.463 07:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.398 nvme0n1 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:04.398 
07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: ]] 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.398 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.333 nvme0n1 00:34:05.333 07:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: ]] 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.333 
07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.333 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.268 nvme0n1 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.268 07:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.834 nvme0n1 00:34:06.834 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.834 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.834 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.834 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.834 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.834 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: ]] 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.093 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.094 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:07.094 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.094 07:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.094 nvme0n1 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: ]] 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:07.094 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.351 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.352 nvme0n1 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:07.352 07:19:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: ]] 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.352 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.610 nvme0n1 00:34:07.610 07:19:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: ]] 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.610 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.868 nvme0n1 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.868 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.126 nvme0n1 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: ]] 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.126 07:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.388 nvme0n1 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.388 
07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: ]] 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:08.388 07:19:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.388 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.647 nvme0n1 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: ]] 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.647 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.905 nvme0n1 00:34:08.905 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.905 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.905 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.905 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.905 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.905 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.905 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:34:08.905 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.905 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.905 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.905 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.905 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: ]] 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.906 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.164 nvme0n1 00:34:09.164 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.164 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.164 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.164 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.164 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.164 07:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.164 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.164 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.164 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.164 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.164 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.164 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.164 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:09.165 
07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.165 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.423 nvme0n1 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.423 
07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: ]] 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.423 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.681 nvme0n1 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: ]] 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:09.681 07:19:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.681 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.247 nvme0n1 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: ]] 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.247 07:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.505 nvme0n1 00:34:10.505 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.505 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.505 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.505 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.505 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.505 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.505 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.505 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.505 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.505 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.505 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.505 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.505 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:34:10.505 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.505 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:10.505 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:10.505 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:10.505 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: ]] 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.506 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.764 nvme0n1 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:10.764 07:19:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.764 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.022 nvme0n1 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: ]] 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:11.022 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.023 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:11.023 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.023 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.023 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.023 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.023 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:11.023 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:11.023 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:11.023 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.023 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.023 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:11.023 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.023 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.023 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.023 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.023 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:11.023 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.023 07:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.588 nvme0n1 00:34:11.588 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.588 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.588 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.588 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.588 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.588 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.588 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.588 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.588 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.588 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.588 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.588 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.588 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:11.588 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.588 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:11.588 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:11.588 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:11.588 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: ]] 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.589 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.156 nvme0n1 00:34:12.156 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.156 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.156 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.156 07:19:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.156 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.156 07:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: ]] 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.156 07:19:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.156 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.724 nvme0n1 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: ]] 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:12.724 07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.724 
07:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.291 nvme0n1 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.291 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.858 nvme0n1 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.858 07:19:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: ]] 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.858 07:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.793 nvme0n1 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: ]] 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.793 07:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.727 nvme0n1 00:34:15.727 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.727 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.727 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.727 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.727 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.727 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.727 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.727 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.727 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:15.727 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.727 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.727 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.727 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:15.727 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.727 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.727 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:15.727 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: ]] 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.728 
07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.728 07:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.663 nvme0n1 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: ]] 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.663 07:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.708 nvme0n1 00:34:17.708 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.708 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.708 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.708 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.708 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.708 07:19:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.708 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.708 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.708 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.708 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.708 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.708 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.708 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:17.708 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.708 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:17.708 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:17.708 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.709 07:19:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.709 07:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.283 nvme0n1 00:34:18.283 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.283 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.283 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.283 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.283 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.283 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: ]] 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:18.541 nvme0n1 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:18.541 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: ]] 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.800 nvme0n1 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:18.800 
07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: ]] 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:18.800 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:18.801 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.801 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:18.801 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.801 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.801 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.801 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.801 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.801 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.801 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.801 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.801 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.801 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.801 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.801 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.801 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.801 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.801 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:18.801 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.801 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.060 nvme0n1 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: ]] 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.060 
07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.060 07:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.319 nvme0n1 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.319 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.578 nvme0n1 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: ]] 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.578 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.837 nvme0n1 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.837 
07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: ]] 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.837 07:19:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.837 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.838 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:19.838 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.838 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.097 nvme0n1 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:20.097 07:19:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: ]] 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.097 07:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.356 nvme0n1 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: ]] 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.356 07:19:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.356 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.615 nvme0n1 00:34:20.615 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.615 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.615 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.615 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.615 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:20.616 
07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.616 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:20.875 nvme0n1 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: ]] 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:20.875 07:19:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.875 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:20.876 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.876 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.135 nvme0n1 00:34:21.135 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.135 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.135 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.135 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.135 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.135 07:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.135 07:19:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: ]] 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.135 07:19:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.135 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.394 nvme0n1 00:34:21.394 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.394 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.394 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.394 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.394 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.394 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.394 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.394 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.394 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.394 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: ]] 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.653 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.912 nvme0n1 00:34:21.912 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.912 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.912 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.912 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.912 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.912 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.912 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.912 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:21.912 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.912 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.912 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.912 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.912 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:21.912 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.912 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:21.912 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:21.912 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:21.912 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: ]] 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.913 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.172 nvme0n1 00:34:22.172 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.172 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.172 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.172 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.172 07:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.172 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.430 nvme0n1 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: ]] 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.430 07:19:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.430 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.431 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.431 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.431 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.431 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.431 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.431 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.431 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.431 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.431 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.431 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:22.431 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.431 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.997 nvme0n1 00:34:22.997 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.997 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.997 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: ]] 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:22.998 07:19:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.998 07:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.565 nvme0n1 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: ]] 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.565 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.566 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.566 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:23.566 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:23.566 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:23.566 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.566 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.566 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:23.566 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.566 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:23.566 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:23.566 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:23.566 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:23.566 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.566 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.133 nvme0n1 00:34:24.133 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.133 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.133 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.133 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.133 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.133 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.133 07:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: ]] 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.133 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:24.134 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:24.134 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:24.134 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.134 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.134 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:24.134 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.134 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:24.134 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:24.134 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:24.134 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:24.134 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.134 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.700 nvme0n1 00:34:24.700 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.700 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.700 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.700 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.700 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.700 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.700 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.700 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.700 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.701 07:19:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.701 07:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.267 nvme0n1 00:34:25.267 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.267 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.267 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.267 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.267 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.267 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.267 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.267 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.267 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.267 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.267 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.267 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:25.267 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.267 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:25.267 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.267 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.267 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:25.267 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:25.267 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:34:25.267 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:34:25.267 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.267 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:25.267 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg4YWY0NzMxOTEzZTRkYWViNTUyZTMyMjlhYWEwZTfTc87r: 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: ]] 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODMwNTEzZjcwMjE3ZmFiMjE4Y2YxZWIzYWQ5ZmY5NWEyMDEyMWNiNmQwMGFlYmFjMTZhMmYzMzY4MmQzN2JlZk2BuvQ=: 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.268 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.201 nvme0n1 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: ]] 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.201 07:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.201 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.201 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.201 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.201 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.201 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.201 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.201 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.201 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.201 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.201 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.201 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.201 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.201 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:26.201 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.201 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.135 nvme0n1 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.136 07:19:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: ]] 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.136 07:19:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.136 07:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.071 nvme0n1 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiZGI4ZjY1ZGM1NDI5MGE2NDRkMjYwNmI2ZTkxNjgwMzc1OWJhN2MzZTY4OGQ1DFOfSg==: 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: ]] 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFlMDUzOWFlZDdlYmE1Y2I3ODAwYjZjNDQ0MGFiNmRX/p1v: 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:28.071 07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.071 
07:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.013 nvme0n1 00:34:29.013 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.013 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.013 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.013 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.013 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.013 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.013 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.013 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.013 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.013 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.013 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.013 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.013 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:29.013 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.013 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:29.013 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:29.013 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:29.013 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:29.013 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTczYmJjZTQ5ODYyY2NlYTFhMTE3MTQ1NTIwMzYwMThiZWI3MmNmYzg2ZmU5ZjI4M2U0NGUyZTA2NmU4MTJiNljH8a0=: 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.014 07:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.948 nvme0n1 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: ]] 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.948 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.949 request: 00:34:29.949 { 00:34:29.949 "name": "nvme0", 00:34:29.949 "trtype": "tcp", 00:34:29.949 "traddr": "10.0.0.1", 00:34:29.949 "adrfam": "ipv4", 00:34:29.949 "trsvcid": "4420", 00:34:29.949 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:29.949 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:29.949 "prchk_reftag": false, 00:34:29.949 "prchk_guard": false, 00:34:29.949 "hdgst": false, 00:34:29.949 "ddgst": false, 00:34:29.949 "allow_unrecognized_csi": false, 00:34:29.949 "method": "bdev_nvme_attach_controller", 00:34:29.949 "req_id": 1 00:34:29.949 } 00:34:29.949 Got JSON-RPC error response 00:34:29.949 response: 00:34:29.949 { 00:34:29.949 "code": -5, 00:34:29.949 "message": "Input/output error" 00:34:29.949 } 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.949 request: 00:34:29.949 { 00:34:29.949 "name": "nvme0", 00:34:29.949 "trtype": "tcp", 00:34:29.949 "traddr": "10.0.0.1", 00:34:29.949 "adrfam": "ipv4", 00:34:29.949 "trsvcid": "4420", 00:34:29.949 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:29.949 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:29.949 "prchk_reftag": false, 00:34:29.949 "prchk_guard": false, 00:34:29.949 "hdgst": false, 00:34:29.949 "ddgst": false, 00:34:29.949 "dhchap_key": "key2", 00:34:29.949 "allow_unrecognized_csi": false, 00:34:29.949 "method": "bdev_nvme_attach_controller", 00:34:29.949 "req_id": 1 00:34:29.949 } 00:34:29.949 Got JSON-RPC error response 00:34:29.949 response: 00:34:29.949 { 00:34:29.949 "code": -5, 00:34:29.949 "message": "Input/output error" 00:34:29.949 } 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.949 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.208 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
00:34:30.208 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:30.208 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.208 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.208 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.208 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.208 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.208 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.208 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.208 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.208 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.208 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.208 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:30.208 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:30.208 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:30.208 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:30.208 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:30.208 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:30.208 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:30.208 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:30.208 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.208 07:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.208 request: 00:34:30.208 { 00:34:30.208 "name": "nvme0", 00:34:30.208 "trtype": "tcp", 00:34:30.208 "traddr": "10.0.0.1", 00:34:30.208 "adrfam": "ipv4", 00:34:30.208 "trsvcid": "4420", 00:34:30.208 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:30.208 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:30.208 "prchk_reftag": false, 00:34:30.208 "prchk_guard": false, 00:34:30.208 "hdgst": false, 00:34:30.208 "ddgst": false, 00:34:30.208 "dhchap_key": "key1", 00:34:30.208 "dhchap_ctrlr_key": "ckey2", 00:34:30.208 "allow_unrecognized_csi": false, 00:34:30.208 "method": "bdev_nvme_attach_controller", 00:34:30.208 "req_id": 1 00:34:30.208 } 00:34:30.208 Got JSON-RPC error response 00:34:30.208 response: 00:34:30.208 { 00:34:30.208 "code": -5, 00:34:30.208 "message": "Input/output 
error" 00:34:30.208 } 00:34:30.208 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:30.208 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.209 nvme0n1 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: ]] 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.209 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.467 request: 00:34:30.467 { 00:34:30.467 "name": "nvme0", 00:34:30.467 "dhchap_key": "key1", 00:34:30.467 "dhchap_ctrlr_key": "ckey2", 00:34:30.467 "method": "bdev_nvme_set_keys", 00:34:30.467 "req_id": 1 00:34:30.467 } 00:34:30.467 Got JSON-RPC error response 00:34:30.467 response: 00:34:30.467 { 00:34:30.467 "code": -13, 00:34:30.467 "message": "Permission denied" 00:34:30.467 } 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:30.467 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.468 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.468 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:30.468 07:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:31.842 07:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.842 07:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:31.842 07:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.842 07:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.842 07:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.842 07:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:31.842 07:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjQ4NTdiOGU1OWQxMzU0OGMxNmM3ODFhMzlmYjdjOThiZWQxNGJlMmQyNDhlMGVhJkPlHw==: 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: ]] 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OGRjMjMwNDVhYzNkYTUxNGIzYmE0NjE3Y2E2ZWEzYzgzNTViNzdhMGNkN2JmZjg4EjyqOg==: 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.778 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.778 nvme0n1 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVlMmZhNzNkZDA1ZWZjNWVjYjRhNGU2YWRhZDk3NGM4RYVv: 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: ]] 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDZmMzhmNTJlNmQ1NmRlY2E4MjZkZjhkNWQzMWRlNWSeS3d3: 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.779 request: 00:34:32.779 { 00:34:32.779 "name": "nvme0", 00:34:32.779 "dhchap_key": "key2", 00:34:32.779 "dhchap_ctrlr_key": "ckey1", 00:34:32.779 "method": "bdev_nvme_set_keys", 00:34:32.779 "req_id": 1 00:34:32.779 } 00:34:32.779 Got JSON-RPC error response 00:34:32.779 response: 00:34:32.779 { 00:34:32.779 "code": -13, 00:34:32.779 "message": "Permission denied" 00:34:32.779 } 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:32.779 07:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:34.154 07:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.154 07:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:34.154 07:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.154 07:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.154 07:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.154 07:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:34.154 07:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.089 07:19:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:35.089 rmmod nvme_tcp 00:34:35.089 rmmod nvme_fabrics 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 384048 ']' 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 384048 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 384048 ']' 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 384048 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 384048 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 384048' 00:34:35.089 killing process with pid 384048 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 384048 00:34:35.089 07:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 384048 00:34:35.349 07:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:35.349 07:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:35.349 07:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:35.349 07:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:34:35.349 07:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@791 -- # iptables-save 00:34:35.349 07:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:35.349 07:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:34:35.349 07:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:35.349 07:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:35.349 07:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.349 07:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:35.349 07:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:37.262 07:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:37.262 07:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:37.262 07:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:37.262 07:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:37.262 07:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:37.262 07:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:34:37.262 07:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:37.262 07:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:37.262 07:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:37.262 07:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:37.262 07:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:37.262 07:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:37.262 07:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:38.645 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:38.645 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:38.645 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:38.645 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:38.645 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:38.645 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:38.645 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:38.645 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:38.645 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:38.645 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:38.645 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:38.645 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:38.645 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:38.645 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:38.645 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:38.645 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
00:34:39.583 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:39.843 07:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.RUN /tmp/spdk.key-null.AhF /tmp/spdk.key-sha256.xHC /tmp/spdk.key-sha384.omy /tmp/spdk.key-sha512.JID /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:39.843 07:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:41.223 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:41.223 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:41.223 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:41.223 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:41.223 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:41.223 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:41.223 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:41.223 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:41.223 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:41.223 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:41.223 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:41.223 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:41.223 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:41.223 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:41.223 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:41.223 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:41.223 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:41.223 00:34:41.223 real 0m55.158s 00:34:41.223 user 0m52.345s 00:34:41.223 sys 0m6.256s 00:34:41.223 07:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:41.223 07:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.223 ************************************ 00:34:41.223 END TEST nvmf_auth_host 00:34:41.223 ************************************ 00:34:41.223 07:20:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:34:41.223 07:20:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:41.223 07:20:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:41.223 07:20:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:41.223 07:20:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.223 ************************************ 00:34:41.223 START TEST nvmf_digest 00:34:41.223 ************************************ 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:41.223 * Looking for test storage... 
00:34:41.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:41.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.223 --rc genhtml_branch_coverage=1 00:34:41.223 --rc genhtml_function_coverage=1 00:34:41.223 --rc genhtml_legend=1 00:34:41.223 --rc geninfo_all_blocks=1 00:34:41.223 --rc geninfo_unexecuted_blocks=1 00:34:41.223 00:34:41.223 ' 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:41.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.223 --rc genhtml_branch_coverage=1 00:34:41.223 --rc genhtml_function_coverage=1 00:34:41.223 --rc genhtml_legend=1 00:34:41.223 --rc geninfo_all_blocks=1 00:34:41.223 --rc geninfo_unexecuted_blocks=1 00:34:41.223 00:34:41.223 ' 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:41.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.223 --rc genhtml_branch_coverage=1 00:34:41.223 --rc genhtml_function_coverage=1 00:34:41.223 --rc genhtml_legend=1 00:34:41.223 --rc geninfo_all_blocks=1 00:34:41.223 --rc geninfo_unexecuted_blocks=1 00:34:41.223 00:34:41.223 ' 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:41.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.223 --rc genhtml_branch_coverage=1 00:34:41.223 --rc genhtml_function_coverage=1 00:34:41.223 --rc genhtml_legend=1 00:34:41.223 --rc geninfo_all_blocks=1 00:34:41.223 --rc geninfo_unexecuted_blocks=1 00:34:41.223 00:34:41.223 ' 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:41.223 
07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:41.223 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:41.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:41.224 07:20:02 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:34:41.224 07:20:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:43.761 
07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:43.761 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:43.761 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:43.761 Found net devices under 0000:0a:00.0: cvl_0_0 
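The xtrace above is prepare_net_devs walking the PCI bus: both E810 ports (0x8086:0x159b) match the e810 list, each function is bound to the ice driver, and /sys/bus/pci/devices/<bdf>/net/ maps it to its renamed interface (cvl_0_0 for 0000:0a:00.0, cvl_0_1 for 0000:0a:00.1 just below). A minimal standalone sketch of that discovery, assuming the standard sysfs layout; this is an illustration, not the harness code:

    # List Intel E810 (8086:159b) functions and the kernel net device behind each one.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
        done
    done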
00:34:43.761 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:43.762 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:43.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:43.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:34:43.762 00:34:43.762 --- 10.0.0.2 ping statistics --- 00:34:43.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:43.762 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:43.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:43.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:34:43.762 00:34:43.762 --- 10.0.0.1 ping statistics --- 00:34:43.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:43.762 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:43.762 ************************************ 00:34:43.762 START TEST nvmf_digest_clean 00:34:43.762 ************************************ 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=394068 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 394068 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 394068 ']' 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:43.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:43.762 [2024-11-18 07:20:04.454184] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:34:43.762 [2024-11-18 07:20:04.454256] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:43.762 [2024-11-18 07:20:04.526206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.762 [2024-11-18 07:20:04.572315] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:43.762 [2024-11-18 07:20:04.572365] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:43.762 [2024-11-18 07:20:04.572393] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:43.762 [2024-11-18 07:20:04.572404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:43.762 [2024-11-18 07:20:04.572414] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
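At this point nvmf_tgt (pid 394068) is running under the cvl_0_0_ns_spdk namespace with --wait-for-rpc, and the harness waits on /var/tmp/spdk.sock. As the following entries show, common_target_config then brings up a null0 bdev, the TCP transport ("*** TCP Transport Init ***"), and a listener on 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode1. A minimal sketch of an equivalent bring-up driven through scripts/rpc.py; the null-bdev sizing and the use of individual rpc.py calls are illustrative, digest.sh performs the same configuration via rpc_cmd:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # default socket /var/tmp/spdk.sock
    $RPC framework_start_init                                 # release the --wait-for-rpc hold
    $RPC nvmf_create_transport -t tcp -o                      # NVMF_TRANSPORT_OPTS from common.sh
    $RPC bdev_null_create null0 100 4096                      # illustrative 100 MiB null bdev, 4 KiB blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420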
00:34:43.762 [2024-11-18 07:20:04.573062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.762 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:44.021 null0 00:34:44.021 [2024-11-18 07:20:04.807260] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:44.021 [2024-11-18 07:20:04.831482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:44.021 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.021 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:34:44.022 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:44.022 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:44.022 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:44.022 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:44.022 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:44.022 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:44.022 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=394095 00:34:44.022 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 394095 /var/tmp/bperf.sock 00:34:44.022 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:44.022 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 394095 ']' 00:34:44.022 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:44.022 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:34:44.022 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:44.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:44.022 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:44.022 07:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:44.022 [2024-11-18 07:20:04.878641] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:34:44.022 [2024-11-18 07:20:04.878721] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid394095 ] 00:34:44.022 [2024-11-18 07:20:04.943563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:44.022 [2024-11-18 07:20:04.988243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:44.280 07:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:44.280 07:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:44.280 07:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:44.280 07:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:44.280 07:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:44.539 07:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:44.539 07:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:45.106 nvme0n1 00:34:45.106 07:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:45.106 07:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:45.364 Running I/O for 2 seconds... 
00:34:47.230 18818.00 IOPS, 73.51 MiB/s [2024-11-18T06:20:08.208Z] 18659.00 IOPS, 72.89 MiB/s 00:34:47.230 Latency(us) 00:34:47.230 [2024-11-18T06:20:08.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:47.230 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:47.230 nvme0n1 : 2.05 18305.67 71.51 0.00 0.00 6847.94 3495.25 48156.82 00:34:47.230 [2024-11-18T06:20:08.208Z] =================================================================================================================== 00:34:47.230 [2024-11-18T06:20:08.208Z] Total : 18305.67 71.51 0.00 0.00 6847.94 3495.25 48156.82 00:34:47.230 { 00:34:47.230 "results": [ 00:34:47.230 { 00:34:47.230 "job": "nvme0n1", 00:34:47.230 "core_mask": "0x2", 00:34:47.230 "workload": "randread", 00:34:47.230 "status": "finished", 00:34:47.230 "queue_depth": 128, 00:34:47.230 "io_size": 4096, 00:34:47.230 "runtime": 2.045596, 00:34:47.230 "iops": 18305.667394734835, 00:34:47.230 "mibps": 71.50651326068295, 00:34:47.230 "io_failed": 0, 00:34:47.230 "io_timeout": 0, 00:34:47.230 "avg_latency_us": 6847.939701970838, 00:34:47.230 "min_latency_us": 3495.2533333333336, 00:34:47.230 "max_latency_us": 48156.8237037037 00:34:47.230 } 00:34:47.230 ], 00:34:47.230 "core_count": 1 00:34:47.230 } 00:34:47.230 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:47.230 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:47.230 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:47.230 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:47.230 | select(.opcode=="crc32c") 00:34:47.230 | "\(.module_name) \(.executed)"' 00:34:47.230 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:47.489 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:47.489 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:47.489 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:47.489 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:47.489 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 394095 00:34:47.489 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 394095 ']' 00:34:47.489 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 394095 00:34:47.489 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:47.489 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:47.489 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394095 00:34:47.747 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:47.747 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:34:47.747 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394095' 00:34:47.747 killing process with pid 394095 00:34:47.747 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 394095 00:34:47.747 Received shutdown signal, test time was about 2.000000 seconds 00:34:47.747 00:34:47.747 Latency(us) 00:34:47.747 [2024-11-18T06:20:08.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:47.747 [2024-11-18T06:20:08.725Z] =================================================================================================================== 00:34:47.747 [2024-11-18T06:20:08.725Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:47.747 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 394095 00:34:47.747 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:47.747 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:47.747 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:47.747 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:47.747 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:47.747 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:47.747 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:47.747 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=394578 00:34:47.748 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:47.748 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 394578 /var/tmp/bperf.sock 00:34:47.748 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 394578 ']' 00:34:47.748 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:47.748 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:47.748 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:47.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:47.748 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:47.748 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:47.748 [2024-11-18 07:20:08.711066] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:34:47.748 [2024-11-18 07:20:08.711165] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid394578 ] 00:34:47.748 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:47.748 Zero copy mechanism will not be used. 00:34:48.006 [2024-11-18 07:20:08.779330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:48.006 [2024-11-18 07:20:08.825397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:48.006 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:48.006 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:48.006 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:48.006 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:48.006 07:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:48.573 07:20:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:48.573 07:20:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:49.140 nvme0n1 00:34:49.140 07:20:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:49.140 07:20:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:49.140 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:49.140 Zero copy mechanism will not be used. 00:34:49.140 Running I/O for 2 seconds... 
00:34:51.013 5950.00 IOPS, 743.75 MiB/s [2024-11-18T06:20:11.991Z] 5920.50 IOPS, 740.06 MiB/s 00:34:51.013 Latency(us) 00:34:51.013 [2024-11-18T06:20:11.991Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:51.013 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:51.013 nvme0n1 : 2.00 5920.73 740.09 0.00 0.00 2698.10 706.94 4636.07 00:34:51.013 [2024-11-18T06:20:11.991Z] =================================================================================================================== 00:34:51.013 [2024-11-18T06:20:11.991Z] Total : 5920.73 740.09 0.00 0.00 2698.10 706.94 4636.07 00:34:51.013 { 00:34:51.013 "results": [ 00:34:51.013 { 00:34:51.013 "job": "nvme0n1", 00:34:51.013 "core_mask": "0x2", 00:34:51.013 "workload": "randread", 00:34:51.013 "status": "finished", 00:34:51.013 "queue_depth": 16, 00:34:51.013 "io_size": 131072, 00:34:51.013 "runtime": 2.002626, 00:34:51.013 "iops": 5920.726086648231, 00:34:51.013 "mibps": 740.0907608310289, 00:34:51.013 "io_failed": 0, 00:34:51.013 "io_timeout": 0, 00:34:51.013 "avg_latency_us": 2698.095598724304, 00:34:51.013 "min_latency_us": 706.9392592592593, 00:34:51.013 "max_latency_us": 4636.065185185185 00:34:51.013 } 00:34:51.013 ], 00:34:51.013 "core_count": 1 00:34:51.013 } 00:34:51.271 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:51.271 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:51.271 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:51.271 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:51.271 | select(.opcode=="crc32c") 00:34:51.271 | "\(.module_name) \(.executed)"' 00:34:51.271 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:51.530 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:51.530 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:51.530 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:51.530 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:51.530 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 394578 00:34:51.530 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 394578 ']' 00:34:51.530 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 394578 00:34:51.530 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:51.530 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:51.530 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394578 00:34:51.530 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:51.530 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:34:51.530 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394578' 00:34:51.530 killing process with pid 394578 00:34:51.530 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 394578 00:34:51.530 Received shutdown signal, test time was about 2.000000 seconds 00:34:51.530 00:34:51.530 Latency(us) 00:34:51.530 [2024-11-18T06:20:12.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:51.530 [2024-11-18T06:20:12.508Z] =================================================================================================================== 00:34:51.530 [2024-11-18T06:20:12.508Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:51.530 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 394578 00:34:51.789 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:34:51.789 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:51.789 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:51.789 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:51.789 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:51.789 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:51.789 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:51.789 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=395026 00:34:51.789 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 395026 /var/tmp/bperf.sock 00:34:51.789 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:51.789 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 395026 ']' 00:34:51.789 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:51.789 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:51.789 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:51.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:51.789 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:51.789 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:51.789 [2024-11-18 07:20:12.576789] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:34:51.789 [2024-11-18 07:20:12.576885] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395026 ] 00:34:51.789 [2024-11-18 07:20:12.641824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:51.789 [2024-11-18 07:20:12.686100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:52.047 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:52.047 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:52.047 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:52.047 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:52.047 07:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:52.305 07:20:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:52.306 07:20:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:52.872 nvme0n1 00:34:52.872 07:20:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:52.872 07:20:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:52.872 Running I/O for 2 seconds... 
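Each bdevperf run in this test prints a human-readable latency table followed by the raw JSON block for the same run. Purely as an illustration, the key figures can be pulled back out of that JSON with jq, which the harness already uses for the accel stats; results.json is a hypothetical capture of one of the {"results": [...]} objects shown in this log, with the leading log timestamps stripped:

  # hypothetical capture: results.json holds one {"results": [...]} object from this log
  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json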
00:34:55.181 21671.00 IOPS, 84.65 MiB/s [2024-11-18T06:20:16.159Z] 20940.50 IOPS, 81.80 MiB/s 00:34:55.181 Latency(us) 00:34:55.181 [2024-11-18T06:20:16.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:55.181 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:55.181 nvme0n1 : 2.01 20940.34 81.80 0.00 0.00 6099.38 2633.58 12379.02 00:34:55.181 [2024-11-18T06:20:16.159Z] =================================================================================================================== 00:34:55.181 [2024-11-18T06:20:16.159Z] Total : 20940.34 81.80 0.00 0.00 6099.38 2633.58 12379.02 00:34:55.181 { 00:34:55.181 "results": [ 00:34:55.181 { 00:34:55.181 "job": "nvme0n1", 00:34:55.181 "core_mask": "0x2", 00:34:55.181 "workload": "randwrite", 00:34:55.181 "status": "finished", 00:34:55.181 "queue_depth": 128, 00:34:55.181 "io_size": 4096, 00:34:55.181 "runtime": 2.007656, 00:34:55.181 "iops": 20940.340377036704, 00:34:55.181 "mibps": 81.79820459779963, 00:34:55.181 "io_failed": 0, 00:34:55.181 "io_timeout": 0, 00:34:55.181 "avg_latency_us": 6099.378737687284, 00:34:55.181 "min_latency_us": 2633.5762962962963, 00:34:55.181 "max_latency_us": 12379.022222222222 00:34:55.181 } 00:34:55.181 ], 00:34:55.181 "core_count": 1 00:34:55.181 } 00:34:55.181 07:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:55.181 07:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:55.181 07:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:55.181 07:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:55.181 | select(.opcode=="crc32c") 00:34:55.181 | "\(.module_name) \(.executed)"' 00:34:55.181 07:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:55.181 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:55.181 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:55.181 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:55.181 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:55.181 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 395026 00:34:55.181 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 395026 ']' 00:34:55.181 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 395026 00:34:55.181 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:55.181 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:55.181 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 395026 00:34:55.181 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:55.181 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:34:55.181 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 395026' 00:34:55.181 killing process with pid 395026 00:34:55.181 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 395026 00:34:55.181 Received shutdown signal, test time was about 2.000000 seconds 00:34:55.181 00:34:55.181 Latency(us) 00:34:55.181 [2024-11-18T06:20:16.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:55.181 [2024-11-18T06:20:16.159Z] =================================================================================================================== 00:34:55.181 [2024-11-18T06:20:16.159Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:55.181 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 395026 00:34:55.440 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:34:55.440 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:55.440 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:55.440 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:55.440 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:55.440 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:55.440 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:55.440 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=395436 00:34:55.440 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:55.441 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 395436 /var/tmp/bperf.sock 00:34:55.441 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 395436 ']' 00:34:55.441 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:55.441 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:55.441 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:55.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:55.441 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:55.441 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:55.441 [2024-11-18 07:20:16.316461] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:34:55.441 [2024-11-18 07:20:16.316579] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395436 ] 00:34:55.441 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:55.441 Zero copy mechanism will not be used. 00:34:55.441 [2024-11-18 07:20:16.382132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.699 [2024-11-18 07:20:16.426076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:55.699 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:55.699 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:55.699 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:55.699 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:55.699 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:55.958 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:55.958 07:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:56.524 nvme0n1 00:34:56.524 07:20:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:56.524 07:20:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:56.524 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:56.524 Zero copy mechanism will not be used. 00:34:56.524 Running I/O for 2 seconds... 
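For the 128 KiB runs such as the one below, the MiB/s column is simply IOPS scaled by the I/O size: MiB/s = IOPS × 131072 / 1048576 = IOPS / 8, so 5532.62 IOPS corresponds to about 691.58 MiB/s, matching the reported mibps. The zero-copy notices above are expected for these runs, since the 131072-byte I/O size is above the 65536-byte threshold.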
00:34:58.831 5546.00 IOPS, 693.25 MiB/s [2024-11-18T06:20:19.809Z] 5534.50 IOPS, 691.81 MiB/s 00:34:58.831 Latency(us) 00:34:58.831 [2024-11-18T06:20:19.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.831 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:58.831 nvme0n1 : 2.00 5532.62 691.58 0.00 0.00 2885.23 1759.76 4854.52 00:34:58.831 [2024-11-18T06:20:19.809Z] =================================================================================================================== 00:34:58.831 [2024-11-18T06:20:19.809Z] Total : 5532.62 691.58 0.00 0.00 2885.23 1759.76 4854.52 00:34:58.831 { 00:34:58.831 "results": [ 00:34:58.831 { 00:34:58.831 "job": "nvme0n1", 00:34:58.831 "core_mask": "0x2", 00:34:58.831 "workload": "randwrite", 00:34:58.831 "status": "finished", 00:34:58.831 "queue_depth": 16, 00:34:58.831 "io_size": 131072, 00:34:58.831 "runtime": 2.003572, 00:34:58.831 "iops": 5532.618742925136, 00:34:58.831 "mibps": 691.577342865642, 00:34:58.831 "io_failed": 0, 00:34:58.831 "io_timeout": 0, 00:34:58.831 "avg_latency_us": 2885.2344706059243, 00:34:58.831 "min_latency_us": 1759.762962962963, 00:34:58.831 "max_latency_us": 4854.518518518518 00:34:58.831 } 00:34:58.831 ], 00:34:58.831 "core_count": 1 00:34:58.831 } 00:34:58.831 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:58.831 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:58.831 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:58.831 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:58.831 | select(.opcode=="crc32c") 00:34:58.831 | "\(.module_name) \(.executed)"' 00:34:58.831 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:58.831 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:58.831 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:58.831 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:58.831 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:58.831 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 395436 00:34:58.831 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 395436 ']' 00:34:58.831 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 395436 00:34:58.831 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:58.831 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:58.831 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 395436 00:34:58.831 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:58.831 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:34:58.831 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 395436' 00:34:58.831 killing process with pid 395436 00:34:58.831 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 395436 00:34:58.831 Received shutdown signal, test time was about 2.000000 seconds 00:34:58.831 00:34:58.831 Latency(us) 00:34:58.831 [2024-11-18T06:20:19.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.831 [2024-11-18T06:20:19.809Z] =================================================================================================================== 00:34:58.831 [2024-11-18T06:20:19.809Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:58.831 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 395436 00:34:59.090 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 394068 00:34:59.090 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 394068 ']' 00:34:59.090 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 394068 00:34:59.090 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:59.090 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:59.090 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394068 00:34:59.090 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:59.090 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:59.090 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394068' 00:34:59.090 killing process with pid 394068 00:34:59.090 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 394068 00:34:59.090 07:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 394068 00:34:59.348 00:34:59.348 real 0m15.763s 00:34:59.348 user 0m31.694s 00:34:59.348 sys 0m4.259s 00:34:59.348 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:59.348 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:59.348 ************************************ 00:34:59.348 END TEST nvmf_digest_clean 00:34:59.348 ************************************ 00:34:59.348 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:34:59.348 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:59.348 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:59.348 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:59.349 ************************************ 00:34:59.349 START TEST nvmf_digest_error 00:34:59.349 ************************************ 00:34:59.349 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:34:59.349 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:34:59.349 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:59.349 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:59.349 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:59.349 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=395985 00:34:59.349 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:59.349 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 395985 00:34:59.349 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 395985 ']' 00:34:59.349 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:59.349 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:59.349 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:59.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:59.349 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:59.349 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:59.349 [2024-11-18 07:20:20.270545] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:34:59.349 [2024-11-18 07:20:20.270616] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:59.608 [2024-11-18 07:20:20.344371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.608 [2024-11-18 07:20:20.390217] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:59.608 [2024-11-18 07:20:20.390273] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:59.608 [2024-11-18 07:20:20.390296] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:59.608 [2024-11-18 07:20:20.390307] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:59.608 [2024-11-18 07:20:20.390316] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
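The error-injection test that follows routes the target's crc32c work through the "error" accel module and then arms corruption on that module, so reads complete on the host with data digest failures (the repeated "data digest error on tqpair" lines further down). As a condensed sketch only, not a substitute for host/digest.sh, the RPC sequence traced below looks roughly like this, using the paths printed in this log (bperf_rpc calls go to -s /var/tmp/bperf.sock as shown; the rpc_cmd calls presumably hit the nvmf_tgt's default RPC socket, the target having been started with --wait-for-rpc):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # target side: have the "error" accel module handle crc32c
  $RPC accel_assign_opc -o crc32c -m error
  # host side: bdevperf answers RPCs on /var/tmp/bperf.sock
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $RPC accel_error_inject_error -o crc32c -t disable
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # then arm corruption on crc32c (arguments exactly as traced below)
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256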
00:34:59.608 [2024-11-18 07:20:20.390961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:59.608 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:59.608 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:34:59.608 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:59.608 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:59.608 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:59.608 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:59.608 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:34:59.608 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.608 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:59.608 [2024-11-18 07:20:20.535723] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:34:59.608 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.608 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:34:59.608 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:34:59.608 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.608 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:59.867 null0 00:34:59.867 [2024-11-18 07:20:20.654743] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:59.867 [2024-11-18 07:20:20.679007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:59.867 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.867 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:34:59.867 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:59.867 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:59.867 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:59.867 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:59.867 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=396011 00:34:59.867 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 396011 /var/tmp/bperf.sock 00:34:59.867 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 396011 ']' 00:34:59.867 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:34:59.867 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:59.867 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:59.867 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:59.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:59.867 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:59.867 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:59.867 [2024-11-18 07:20:20.734363] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:34:59.867 [2024-11-18 07:20:20.734437] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396011 ] 00:34:59.867 [2024-11-18 07:20:20.807047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:00.125 [2024-11-18 07:20:20.858721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:00.125 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:00.125 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:00.125 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:00.125 07:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:00.384 07:20:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:00.384 07:20:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.384 07:20:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:00.384 07:20:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.384 07:20:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:00.384 07:20:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:00.951 nvme0n1 00:35:00.951 07:20:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:00.951 07:20:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.951 07:20:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:00.951 
07:20:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.951 07:20:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:00.951 07:20:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:00.951 Running I/O for 2 seconds... 00:35:00.951 [2024-11-18 07:20:21.905967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:00.951 [2024-11-18 07:20:21.906044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.951 [2024-11-18 07:20:21.906065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.951 [2024-11-18 07:20:21.921391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:00.951 [2024-11-18 07:20:21.921425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.951 [2024-11-18 07:20:21.921443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.211 [2024-11-18 07:20:21.936549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.211 [2024-11-18 07:20:21.936582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.211 [2024-11-18 07:20:21.936599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.211 [2024-11-18 07:20:21.950273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.211 [2024-11-18 07:20:21.950320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.211 [2024-11-18 07:20:21.950337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.211 [2024-11-18 07:20:21.961900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.211 [2024-11-18 07:20:21.961930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.211 [2024-11-18 07:20:21.961946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.211 [2024-11-18 07:20:21.975823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.211 [2024-11-18 07:20:21.975863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.211 [2024-11-18 07:20:21.975880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:01.211 [2024-11-18 07:20:21.990923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.211 [2024-11-18 07:20:21.990971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.211 [2024-11-18 07:20:21.990988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.211 [2024-11-18 07:20:22.002084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.211 [2024-11-18 07:20:22.002117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.211 [2024-11-18 07:20:22.002135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.211 [2024-11-18 07:20:22.017399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.211 [2024-11-18 07:20:22.017432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.211 [2024-11-18 07:20:22.017450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.211 [2024-11-18 07:20:22.031865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.211 [2024-11-18 07:20:22.031898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.211 [2024-11-18 07:20:22.031915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.211 [2024-11-18 07:20:22.041773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.211 [2024-11-18 07:20:22.041819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.211 [2024-11-18 07:20:22.041836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.211 [2024-11-18 07:20:22.056028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.211 [2024-11-18 07:20:22.056058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.211 [2024-11-18 07:20:22.056074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.211 [2024-11-18 07:20:22.070348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.211 [2024-11-18 07:20:22.070377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.211 [2024-11-18 07:20:22.070393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.211 [2024-11-18 07:20:22.084598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.211 [2024-11-18 07:20:22.084630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.211 [2024-11-18 07:20:22.084648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.211 [2024-11-18 07:20:22.096429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.211 [2024-11-18 07:20:22.096459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.211 [2024-11-18 07:20:22.096476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.211 [2024-11-18 07:20:22.111351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.211 [2024-11-18 07:20:22.111383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.211 [2024-11-18 07:20:22.111401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.211 [2024-11-18 07:20:22.126993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.211 [2024-11-18 07:20:22.127024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.211 [2024-11-18 07:20:22.127041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.211 [2024-11-18 07:20:22.140792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.211 [2024-11-18 07:20:22.140824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.211 [2024-11-18 07:20:22.140842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.211 [2024-11-18 07:20:22.152102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.211 [2024-11-18 07:20:22.152131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.211 [2024-11-18 07:20:22.152146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.211 [2024-11-18 07:20:22.167836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.211 [2024-11-18 07:20:22.167868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.211 [2024-11-18 07:20:22.167885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.211 [2024-11-18 07:20:22.182692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.211 [2024-11-18 07:20:22.182725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.211 [2024-11-18 07:20:22.182743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.471 [2024-11-18 07:20:22.194409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.471 [2024-11-18 07:20:22.194437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.471 [2024-11-18 07:20:22.194453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.471 [2024-11-18 07:20:22.208269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.471 [2024-11-18 07:20:22.208300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.471 [2024-11-18 07:20:22.208338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.471 [2024-11-18 07:20:22.223217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.471 [2024-11-18 07:20:22.223246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.471 [2024-11-18 07:20:22.223261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.471 [2024-11-18 07:20:22.236524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.471 [2024-11-18 07:20:22.236556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.471 [2024-11-18 07:20:22.236573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.471 [2024-11-18 07:20:22.249180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.471 [2024-11-18 07:20:22.249211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.471 [2024-11-18 07:20:22.249227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.471 [2024-11-18 07:20:22.261109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.471 [2024-11-18 07:20:22.261138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:01.471 [2024-11-18 07:20:22.261155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.471 [2024-11-18 07:20:22.274184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.471 [2024-11-18 07:20:22.274216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.471 [2024-11-18 07:20:22.274233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.471 [2024-11-18 07:20:22.287779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.471 [2024-11-18 07:20:22.287826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.471 [2024-11-18 07:20:22.287844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.471 [2024-11-18 07:20:22.302389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.471 [2024-11-18 07:20:22.302423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.471 [2024-11-18 07:20:22.302440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.471 [2024-11-18 07:20:22.318678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.472 [2024-11-18 07:20:22.318710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.472 [2024-11-18 07:20:22.318727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.472 [2024-11-18 07:20:22.329372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.472 [2024-11-18 07:20:22.329409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.472 [2024-11-18 07:20:22.329426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.472 [2024-11-18 07:20:22.344919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.472 [2024-11-18 07:20:22.344948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.472 [2024-11-18 07:20:22.344964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.472 [2024-11-18 07:20:22.360231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.472 [2024-11-18 07:20:22.360260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 
lba:25266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.472 [2024-11-18 07:20:22.360276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.472 [2024-11-18 07:20:22.370496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.472 [2024-11-18 07:20:22.370526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.472 [2024-11-18 07:20:22.370542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.472 [2024-11-18 07:20:22.385627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.472 [2024-11-18 07:20:22.385656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.472 [2024-11-18 07:20:22.385673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.472 [2024-11-18 07:20:22.399629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.472 [2024-11-18 07:20:22.399660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.472 [2024-11-18 07:20:22.399693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.472 [2024-11-18 07:20:22.415157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.472 [2024-11-18 07:20:22.415189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.472 [2024-11-18 07:20:22.415221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.472 [2024-11-18 07:20:22.426526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.472 [2024-11-18 07:20:22.426557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.472 [2024-11-18 07:20:22.426575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.472 [2024-11-18 07:20:22.443362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.472 [2024-11-18 07:20:22.443393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.472 [2024-11-18 07:20:22.443431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.731 [2024-11-18 07:20:22.460604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.731 [2024-11-18 07:20:22.460644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.731 [2024-11-18 07:20:22.460661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.731 [2024-11-18 07:20:22.475050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.731 [2024-11-18 07:20:22.475081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.731 [2024-11-18 07:20:22.475098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.731 [2024-11-18 07:20:22.486035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.731 [2024-11-18 07:20:22.486065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.731 [2024-11-18 07:20:22.486080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.731 [2024-11-18 07:20:22.501990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.731 [2024-11-18 07:20:22.502019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.731 [2024-11-18 07:20:22.502036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.731 [2024-11-18 07:20:22.518397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.731 [2024-11-18 07:20:22.518428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.731 [2024-11-18 07:20:22.518443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.731 [2024-11-18 07:20:22.528977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.731 [2024-11-18 07:20:22.529006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.731 [2024-11-18 07:20:22.529023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.731 [2024-11-18 07:20:22.543669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.731 [2024-11-18 07:20:22.543699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.731 [2024-11-18 07:20:22.543716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.731 [2024-11-18 07:20:22.557814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 
00:35:01.731 [2024-11-18 07:20:22.557842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.731 [2024-11-18 07:20:22.557859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.731 [2024-11-18 07:20:22.570583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.731 [2024-11-18 07:20:22.570619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.731 [2024-11-18 07:20:22.570636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.731 [2024-11-18 07:20:22.585110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.731 [2024-11-18 07:20:22.585139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.731 [2024-11-18 07:20:22.585155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.731 [2024-11-18 07:20:22.597311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.731 [2024-11-18 07:20:22.597339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.731 [2024-11-18 07:20:22.597355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.732 [2024-11-18 07:20:22.613110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.732 [2024-11-18 07:20:22.613139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.732 [2024-11-18 07:20:22.613155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.732 [2024-11-18 07:20:22.626502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.732 [2024-11-18 07:20:22.626534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.732 [2024-11-18 07:20:22.626553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.732 [2024-11-18 07:20:22.637803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.732 [2024-11-18 07:20:22.637850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.732 [2024-11-18 07:20:22.637867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.732 [2024-11-18 07:20:22.653420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.732 [2024-11-18 07:20:22.653449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.732 [2024-11-18 07:20:22.653465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.732 [2024-11-18 07:20:22.663772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.732 [2024-11-18 07:20:22.663804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.732 [2024-11-18 07:20:22.663821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.732 [2024-11-18 07:20:22.679140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.732 [2024-11-18 07:20:22.679169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.732 [2024-11-18 07:20:22.679185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.732 [2024-11-18 07:20:22.693565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.732 [2024-11-18 07:20:22.693596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.732 [2024-11-18 07:20:22.693613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.732 [2024-11-18 07:20:22.706926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.732 [2024-11-18 07:20:22.706958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.732 [2024-11-18 07:20:22.706975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.992 [2024-11-18 07:20:22.722316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.992 [2024-11-18 07:20:22.722345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.992 [2024-11-18 07:20:22.722362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.992 [2024-11-18 07:20:22.737064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.992 [2024-11-18 07:20:22.737094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.992 [2024-11-18 07:20:22.737110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.992 [2024-11-18 07:20:22.749297] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.992 [2024-11-18 07:20:22.749327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.992 [2024-11-18 07:20:22.749358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.992 [2024-11-18 07:20:22.761509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.992 [2024-11-18 07:20:22.761554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.992 [2024-11-18 07:20:22.761572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.992 [2024-11-18 07:20:22.774650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.992 [2024-11-18 07:20:22.774681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.992 [2024-11-18 07:20:22.774698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.992 [2024-11-18 07:20:22.789449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.992 [2024-11-18 07:20:22.789502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.992 [2024-11-18 07:20:22.789523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.992 [2024-11-18 07:20:22.801074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.992 [2024-11-18 07:20:22.801102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.992 [2024-11-18 07:20:22.801122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.992 [2024-11-18 07:20:22.814592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.992 [2024-11-18 07:20:22.814622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.992 [2024-11-18 07:20:22.814637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.992 [2024-11-18 07:20:22.829655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.992 [2024-11-18 07:20:22.829687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.992 [2024-11-18 07:20:22.829704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:35:01.992 [2024-11-18 07:20:22.844614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.992 [2024-11-18 07:20:22.844655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.992 [2024-11-18 07:20:22.844674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.992 [2024-11-18 07:20:22.859208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.992 [2024-11-18 07:20:22.859240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.992 [2024-11-18 07:20:22.859257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.992 [2024-11-18 07:20:22.870685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.992 [2024-11-18 07:20:22.870714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.992 [2024-11-18 07:20:22.870730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.992 [2024-11-18 07:20:22.883175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.992 [2024-11-18 07:20:22.883203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.993 [2024-11-18 07:20:22.883219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.993 18376.00 IOPS, 71.78 MiB/s [2024-11-18T06:20:22.971Z] [2024-11-18 07:20:22.896261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.993 [2024-11-18 07:20:22.896293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.993 [2024-11-18 07:20:22.896310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.993 [2024-11-18 07:20:22.909376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.993 [2024-11-18 07:20:22.909405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.993 [2024-11-18 07:20:22.909421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.993 [2024-11-18 07:20:22.923320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.993 [2024-11-18 07:20:22.923365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.993 [2024-11-18 07:20:22.923381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.993 [2024-11-18 07:20:22.937628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.993 [2024-11-18 07:20:22.937657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.993 [2024-11-18 07:20:22.937673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.993 [2024-11-18 07:20:22.949007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.993 [2024-11-18 07:20:22.949036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.993 [2024-11-18 07:20:22.949052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.993 [2024-11-18 07:20:22.964683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:01.993 [2024-11-18 07:20:22.964714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.993 [2024-11-18 07:20:22.964730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.251 [2024-11-18 07:20:22.978364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.251 [2024-11-18 07:20:22.978394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.251 [2024-11-18 07:20:22.978411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.251 [2024-11-18 07:20:22.992768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.251 [2024-11-18 07:20:22.992816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.251 [2024-11-18 07:20:22.992834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.251 [2024-11-18 07:20:23.003720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.251 [2024-11-18 07:20:23.003751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.251 [2024-11-18 07:20:23.003782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.251 [2024-11-18 07:20:23.018986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.251 [2024-11-18 07:20:23.019017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:02.251 [2024-11-18 07:20:23.019033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.251 [2024-11-18 07:20:23.033390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.251 [2024-11-18 07:20:23.033422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.251 [2024-11-18 07:20:23.033460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.251 [2024-11-18 07:20:23.046631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.251 [2024-11-18 07:20:23.046663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.251 [2024-11-18 07:20:23.046681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.251 [2024-11-18 07:20:23.058921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.251 [2024-11-18 07:20:23.058950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.251 [2024-11-18 07:20:23.058965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.251 [2024-11-18 07:20:23.073467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.251 [2024-11-18 07:20:23.073520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.251 [2024-11-18 07:20:23.073537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.251 [2024-11-18 07:20:23.088852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.251 [2024-11-18 07:20:23.088895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.251 [2024-11-18 07:20:23.088911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.251 [2024-11-18 07:20:23.102917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.251 [2024-11-18 07:20:23.102948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.251 [2024-11-18 07:20:23.102965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.251 [2024-11-18 07:20:23.114302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.251 [2024-11-18 07:20:23.114347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:17104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.251 [2024-11-18 07:20:23.114363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.251 [2024-11-18 07:20:23.128852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.251 [2024-11-18 07:20:23.128884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.251 [2024-11-18 07:20:23.128901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.251 [2024-11-18 07:20:23.140317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.251 [2024-11-18 07:20:23.140346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.251 [2024-11-18 07:20:23.140362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.251 [2024-11-18 07:20:23.154776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.251 [2024-11-18 07:20:23.154828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.251 [2024-11-18 07:20:23.154846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.251 [2024-11-18 07:20:23.169272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.251 [2024-11-18 07:20:23.169303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.251 [2024-11-18 07:20:23.169320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.251 [2024-11-18 07:20:23.185953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.251 [2024-11-18 07:20:23.185983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.251 [2024-11-18 07:20:23.186000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.251 [2024-11-18 07:20:23.201318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.251 [2024-11-18 07:20:23.201347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.251 [2024-11-18 07:20:23.201362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.251 [2024-11-18 07:20:23.215739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.251 [2024-11-18 07:20:23.215771] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.251 [2024-11-18 07:20:23.215788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.251 [2024-11-18 07:20:23.227192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.251 [2024-11-18 07:20:23.227221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.251 [2024-11-18 07:20:23.227252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.509 [2024-11-18 07:20:23.241752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.509 [2024-11-18 07:20:23.241797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.509 [2024-11-18 07:20:23.241813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.509 [2024-11-18 07:20:23.255527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.509 [2024-11-18 07:20:23.255559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.509 [2024-11-18 07:20:23.255591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.509 [2024-11-18 07:20:23.267064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.509 [2024-11-18 07:20:23.267092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.509 [2024-11-18 07:20:23.267108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.509 [2024-11-18 07:20:23.281934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.509 [2024-11-18 07:20:23.281964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.509 [2024-11-18 07:20:23.281980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.509 [2024-11-18 07:20:23.294426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.509 [2024-11-18 07:20:23.294455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.509 [2024-11-18 07:20:23.294470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.509 [2024-11-18 07:20:23.306842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 
00:35:02.509 [2024-11-18 07:20:23.306887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.509 [2024-11-18 07:20:23.306903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.509 [2024-11-18 07:20:23.319933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.509 [2024-11-18 07:20:23.319965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.509 [2024-11-18 07:20:23.319981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.509 [2024-11-18 07:20:23.334670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.509 [2024-11-18 07:20:23.334700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.509 [2024-11-18 07:20:23.334718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.509 [2024-11-18 07:20:23.349546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.509 [2024-11-18 07:20:23.349578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.509 [2024-11-18 07:20:23.349595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.509 [2024-11-18 07:20:23.363019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.509 [2024-11-18 07:20:23.363050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.509 [2024-11-18 07:20:23.363067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.509 [2024-11-18 07:20:23.374615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.509 [2024-11-18 07:20:23.374645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.510 [2024-11-18 07:20:23.374662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.510 [2024-11-18 07:20:23.389433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.510 [2024-11-18 07:20:23.389463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.510 [2024-11-18 07:20:23.389510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.510 [2024-11-18 07:20:23.404060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.510 [2024-11-18 07:20:23.404090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.510 [2024-11-18 07:20:23.404107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.510 [2024-11-18 07:20:23.415972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.510 [2024-11-18 07:20:23.416000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.510 [2024-11-18 07:20:23.416015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.510 [2024-11-18 07:20:23.431914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.510 [2024-11-18 07:20:23.431942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.510 [2024-11-18 07:20:23.431957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.510 [2024-11-18 07:20:23.446321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.510 [2024-11-18 07:20:23.446353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.510 [2024-11-18 07:20:23.446371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.510 [2024-11-18 07:20:23.458246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.510 [2024-11-18 07:20:23.458276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.510 [2024-11-18 07:20:23.458293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.510 [2024-11-18 07:20:23.473121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.510 [2024-11-18 07:20:23.473150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.510 [2024-11-18 07:20:23.473166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.769 [2024-11-18 07:20:23.488651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.769 [2024-11-18 07:20:23.488698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.769 [2024-11-18 07:20:23.488715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.769 [2024-11-18 07:20:23.500174] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.769 [2024-11-18 07:20:23.500202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.769 [2024-11-18 07:20:23.500217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.769 [2024-11-18 07:20:23.514969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.769 [2024-11-18 07:20:23.515005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.769 [2024-11-18 07:20:23.515041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.769 [2024-11-18 07:20:23.530378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.769 [2024-11-18 07:20:23.530408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.769 [2024-11-18 07:20:23.530423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.769 [2024-11-18 07:20:23.547211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.769 [2024-11-18 07:20:23.547240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.769 [2024-11-18 07:20:23.547270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.769 [2024-11-18 07:20:23.561027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.769 [2024-11-18 07:20:23.561058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.769 [2024-11-18 07:20:23.561075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.769 [2024-11-18 07:20:23.571960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.769 [2024-11-18 07:20:23.571988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.769 [2024-11-18 07:20:23.572004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.769 [2024-11-18 07:20:23.585381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.769 [2024-11-18 07:20:23.585411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.769 [2024-11-18 07:20:23.585426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:35:02.769 [2024-11-18 07:20:23.599561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.769 [2024-11-18 07:20:23.599595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.769 [2024-11-18 07:20:23.599613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.769 [2024-11-18 07:20:23.613481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.769 [2024-11-18 07:20:23.613517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.769 [2024-11-18 07:20:23.613534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.769 [2024-11-18 07:20:23.630156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.769 [2024-11-18 07:20:23.630188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.769 [2024-11-18 07:20:23.630205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.769 [2024-11-18 07:20:23.640812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.769 [2024-11-18 07:20:23.640843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.769 [2024-11-18 07:20:23.640859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.769 [2024-11-18 07:20:23.655299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.769 [2024-11-18 07:20:23.655327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.769 [2024-11-18 07:20:23.655343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.769 [2024-11-18 07:20:23.670239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.769 [2024-11-18 07:20:23.670268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.769 [2024-11-18 07:20:23.670299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.769 [2024-11-18 07:20:23.686326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.769 [2024-11-18 07:20:23.686355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.769 [2024-11-18 07:20:23.686371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.769 [2024-11-18 07:20:23.701957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.769 [2024-11-18 07:20:23.701988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.769 [2024-11-18 07:20:23.702004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.769 [2024-11-18 07:20:23.717760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.769 [2024-11-18 07:20:23.717804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.769 [2024-11-18 07:20:23.717819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.769 [2024-11-18 07:20:23.732528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.769 [2024-11-18 07:20:23.732559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.769 [2024-11-18 07:20:23.732576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.769 [2024-11-18 07:20:23.743096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:02.769 [2024-11-18 07:20:23.743125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.769 [2024-11-18 07:20:23.743142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.029 [2024-11-18 07:20:23.757208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:03.029 [2024-11-18 07:20:23.757247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.029 [2024-11-18 07:20:23.757279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.029 [2024-11-18 07:20:23.771340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:03.029 [2024-11-18 07:20:23.771368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.029 [2024-11-18 07:20:23.771383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.029 [2024-11-18 07:20:23.783512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:03.029 [2024-11-18 07:20:23.783541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.029 [2024-11-18 07:20:23.783557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.029 [2024-11-18 07:20:23.795018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:03.029 [2024-11-18 07:20:23.795047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.029 [2024-11-18 07:20:23.795063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.029 [2024-11-18 07:20:23.808348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:03.029 [2024-11-18 07:20:23.808376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.029 [2024-11-18 07:20:23.808392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.029 [2024-11-18 07:20:23.821581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:03.029 [2024-11-18 07:20:23.821612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.029 [2024-11-18 07:20:23.821630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.029 [2024-11-18 07:20:23.833512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:03.029 [2024-11-18 07:20:23.833564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.029 [2024-11-18 07:20:23.833581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.029 [2024-11-18 07:20:23.846167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:03.029 [2024-11-18 07:20:23.846196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.029 [2024-11-18 07:20:23.846212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.029 [2024-11-18 07:20:23.859201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:03.029 [2024-11-18 07:20:23.859232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.029 [2024-11-18 07:20:23.859249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.029 [2024-11-18 07:20:23.871828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:03.029 [2024-11-18 07:20:23.871859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:03.029 [2024-11-18 07:20:23.871877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.029 [2024-11-18 07:20:23.888202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21883f0) 00:35:03.029 [2024-11-18 07:20:23.888231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.029 [2024-11-18 07:20:23.888248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.029 18480.00 IOPS, 72.19 MiB/s 00:35:03.029 Latency(us) 00:35:03.029 [2024-11-18T06:20:24.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:03.029 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:03.029 nvme0n1 : 2.01 18500.49 72.27 0.00 0.00 6909.66 3495.25 23010.42 00:35:03.029 [2024-11-18T06:20:24.007Z] =================================================================================================================== 00:35:03.030 [2024-11-18T06:20:24.008Z] Total : 18500.49 72.27 0.00 0.00 6909.66 3495.25 23010.42 00:35:03.030 { 00:35:03.030 "results": [ 00:35:03.030 { 00:35:03.030 "job": "nvme0n1", 00:35:03.030 "core_mask": "0x2", 00:35:03.030 "workload": "randread", 00:35:03.030 "status": "finished", 00:35:03.030 "queue_depth": 128, 00:35:03.030 "io_size": 4096, 00:35:03.030 "runtime": 2.006379, 00:35:03.030 "iops": 18500.4926786016, 00:35:03.030 "mibps": 72.2675495257875, 00:35:03.030 "io_failed": 0, 00:35:03.030 "io_timeout": 0, 00:35:03.030 "avg_latency_us": 6909.657530764418, 00:35:03.030 "min_latency_us": 3495.2533333333336, 00:35:03.030 "max_latency_us": 23010.417777777777 00:35:03.030 } 00:35:03.030 ], 00:35:03.030 "core_count": 1 00:35:03.030 } 00:35:03.030 07:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:03.030 07:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:03.030 07:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:03.030 | .driver_specific 00:35:03.030 | .nvme_error 00:35:03.030 | .status_code 00:35:03.030 | .command_transient_transport_error' 00:35:03.030 07:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:03.288 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 )) 00:35:03.288 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 396011 00:35:03.288 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 396011 ']' 00:35:03.288 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 396011 00:35:03.288 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:03.288 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:03.288 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396011 00:35:03.288 
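The trace just above is where host/digest.sh turns this flood of data digest errors into a pass/fail decision: get_transient_errcount asks the still-running bdevperf instance for per-bdev I/O statistics over its RPC socket and extracts the NVMe transient transport error counter with jq, and the (( 145 > 0 )) check requires at least one such error before this first bdevperf process is killed (the killprocess trace continues below). A minimal stand-alone sketch of that query, assuming the workspace path used in this run, a bdevperf still listening on /var/tmp/bperf.sock, and a controller attached as nvme0 with NVMe error accounting enabled:

# Count completions the nvme bdev recorded as COMMAND TRANSIENT TRANSPORT ERROR.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
  jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 )) && echo "injected digest failures surfaced as $errcount transient transport errors"

Because the test configures bdev_nvme_set_options --bdev-retry-count -1 (visible in the setup of the next pass below), the corrupted reads are retried rather than failed back to the job, which lines up with the Fail/s column above reading 0.00 while this counter keeps climbing.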
07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:03.288 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:03.288 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396011' 00:35:03.288 killing process with pid 396011 00:35:03.288 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 396011 00:35:03.288 Received shutdown signal, test time was about 2.000000 seconds 00:35:03.288 00:35:03.288 Latency(us) 00:35:03.288 [2024-11-18T06:20:24.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:03.288 [2024-11-18T06:20:24.266Z] =================================================================================================================== 00:35:03.288 [2024-11-18T06:20:24.266Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:03.288 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 396011 00:35:03.547 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:03.547 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:03.547 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:03.547 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:03.547 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:03.547 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=396422 00:35:03.547 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:03.547 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 396422 /var/tmp/bperf.sock 00:35:03.547 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 396422 ']' 00:35:03.547 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:03.547 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:03.547 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:03.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:03.547 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:03.547 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:03.547 [2024-11-18 07:20:24.430275] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:35:03.547 [2024-11-18 07:20:24.430372] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396422 ] 00:35:03.547 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:03.547 Zero copy mechanism will not be used. 00:35:03.547 [2024-11-18 07:20:24.497141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.806 [2024-11-18 07:20:24.545362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:03.806 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:03.806 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:03.806 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:03.806 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:04.065 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:04.065 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.065 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:04.065 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.065 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:04.065 07:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:04.633 nvme0n1 00:35:04.633 07:20:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:04.633 07:20:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.633 07:20:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:04.633 07:20:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.633 07:20:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:04.633 07:20:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:04.633 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:04.633 Zero copy mechanism will not be used. 00:35:04.633 Running I/O for 2 seconds... 
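Before the second wave of digest-error records starts below, the trace above rebuilds the whole error path for the randread 131072 / qd=16 pass: the freshly started bdevperf gets NVMe error accounting and unlimited bdev retries, crc32c error injection is disabled while the controller attaches with data digests enabled (--ddgst), injection is then switched to corrupt mode, and the run is kicked off with bdevperf.py perform_tests. A sketch of the same RPC sequence follows. The bdevperf socket, target address and flags are taken verbatim from this trace; rpc_cmd is not expanded in this excerpt, so routing the accel_error_inject_error calls to the SPDK default socket of the target application (/var/tmp/spdk.sock) is an assumption.

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf_rpc() { "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }  # bdevperf started with -r /var/tmp/bperf.sock
tgt_rpc()   { "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }   # assumed socket for what the trace calls rpc_cmd

bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1  # keep NVMe error stats, retry failed I/O indefinitely
tgt_rpc accel_error_inject_error -o crc32c -t disable                    # no injection while the controller attaches
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
          -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                 # data digest enabled on this connection
tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 32              # corrupt crc32c results; -i 32 copied verbatim from the trace
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

With crc32c corruption active, every affected READ is reported on the host as a data digest error and completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly what the records below and the counter checked after the previous pass are measuring.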
00:35:04.633 [2024-11-18 07:20:25.606170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.633 [2024-11-18 07:20:25.606242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.633 [2024-11-18 07:20:25.606278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.893 [2024-11-18 07:20:25.611867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.893 [2024-11-18 07:20:25.611904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.893 [2024-11-18 07:20:25.611922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.893 [2024-11-18 07:20:25.618255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.893 [2024-11-18 07:20:25.618287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.893 [2024-11-18 07:20:25.618305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.893 [2024-11-18 07:20:25.625951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.893 [2024-11-18 07:20:25.625983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.893 [2024-11-18 07:20:25.626001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.893 [2024-11-18 07:20:25.632012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.893 [2024-11-18 07:20:25.632044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.893 [2024-11-18 07:20:25.632061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.893 [2024-11-18 07:20:25.637664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.893 [2024-11-18 07:20:25.637696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.893 [2024-11-18 07:20:25.637725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.893 [2024-11-18 07:20:25.643018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.893 [2024-11-18 07:20:25.643051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.893 [2024-11-18 07:20:25.643068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.893 [2024-11-18 07:20:25.649317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.893 [2024-11-18 07:20:25.649349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.893 [2024-11-18 07:20:25.649368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.893 [2024-11-18 07:20:25.654158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.893 [2024-11-18 07:20:25.654205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.893 [2024-11-18 07:20:25.654223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.894 [2024-11-18 07:20:25.659418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.894 [2024-11-18 07:20:25.659451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.894 [2024-11-18 07:20:25.659484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.894 [2024-11-18 07:20:25.665395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.894 [2024-11-18 07:20:25.665427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.894 [2024-11-18 07:20:25.665445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.894 [2024-11-18 07:20:25.670757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.894 [2024-11-18 07:20:25.670789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.894 [2024-11-18 07:20:25.670807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.894 [2024-11-18 07:20:25.676357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.894 [2024-11-18 07:20:25.676402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.894 [2024-11-18 07:20:25.676420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.894 [2024-11-18 07:20:25.682276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.894 [2024-11-18 07:20:25.682308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.894 [2024-11-18 07:20:25.682326] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.894 [2024-11-18 07:20:25.687813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.894 [2024-11-18 07:20:25.687852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.894 [2024-11-18 07:20:25.687870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.894 [2024-11-18 07:20:25.691360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.894 [2024-11-18 07:20:25.691390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.894 [2024-11-18 07:20:25.691407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.894 [2024-11-18 07:20:25.697347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.894 [2024-11-18 07:20:25.697391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.894 [2024-11-18 07:20:25.697409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.894 [2024-11-18 07:20:25.703447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.894 [2024-11-18 07:20:25.703499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.894 [2024-11-18 07:20:25.703518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.894 [2024-11-18 07:20:25.709476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.894 [2024-11-18 07:20:25.709529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.894 [2024-11-18 07:20:25.709547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.894 [2024-11-18 07:20:25.716261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.894 [2024-11-18 07:20:25.716291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.894 [2024-11-18 07:20:25.716307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.894 [2024-11-18 07:20:25.722520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.894 [2024-11-18 07:20:25.722551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.894 [2024-11-18 07:20:25.722569] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.894 [2024-11-18 07:20:25.728609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.894 [2024-11-18 07:20:25.728641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.894 [2024-11-18 07:20:25.728658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.894 [2024-11-18 07:20:25.734616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.894 [2024-11-18 07:20:25.734648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.894 [2024-11-18 07:20:25.734672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.894 [2024-11-18 07:20:25.740972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.894 [2024-11-18 07:20:25.741003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.894 [2024-11-18 07:20:25.741019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.894 [2024-11-18 07:20:25.747734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.894 [2024-11-18 07:20:25.747767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.894 [2024-11-18 07:20:25.747800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.894 [2024-11-18 07:20:25.753713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.894 [2024-11-18 07:20:25.753744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.894 [2024-11-18 07:20:25.753761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.894 [2024-11-18 07:20:25.759575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.894 [2024-11-18 07:20:25.759607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.894 [2024-11-18 07:20:25.759640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.894 [2024-11-18 07:20:25.765036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.894 [2024-11-18 07:20:25.765068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:04.894 [2024-11-18 07:20:25.765086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.894 [2024-11-18 07:20:25.770845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.894 [2024-11-18 07:20:25.770890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.894 [2024-11-18 07:20:25.770907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.894 [2024-11-18 07:20:25.776327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.894 [2024-11-18 07:20:25.776357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.894 [2024-11-18 07:20:25.776374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.894 [2024-11-18 07:20:25.782597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.894 [2024-11-18 07:20:25.782628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.894 [2024-11-18 07:20:25.782645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.894 [2024-11-18 07:20:25.790136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.895 [2024-11-18 07:20:25.790172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.895 [2024-11-18 07:20:25.790204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.895 [2024-11-18 07:20:25.796727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.895 [2024-11-18 07:20:25.796758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.895 [2024-11-18 07:20:25.796790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.895 [2024-11-18 07:20:25.804111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.895 [2024-11-18 07:20:25.804143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.895 [2024-11-18 07:20:25.804175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.895 [2024-11-18 07:20:25.810232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.895 [2024-11-18 07:20:25.810264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.895 [2024-11-18 07:20:25.810281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.895 [2024-11-18 07:20:25.815127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.895 [2024-11-18 07:20:25.815174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.895 [2024-11-18 07:20:25.815193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.895 [2024-11-18 07:20:25.819387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.895 [2024-11-18 07:20:25.819418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.895 [2024-11-18 07:20:25.819436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.895 [2024-11-18 07:20:25.823868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.895 [2024-11-18 07:20:25.823900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.895 [2024-11-18 07:20:25.823917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.895 [2024-11-18 07:20:25.828509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.895 [2024-11-18 07:20:25.828540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.895 [2024-11-18 07:20:25.828557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.895 [2024-11-18 07:20:25.833060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.895 [2024-11-18 07:20:25.833091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.895 [2024-11-18 07:20:25.833108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.895 [2024-11-18 07:20:25.837571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.895 [2024-11-18 07:20:25.837602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.895 [2024-11-18 07:20:25.837619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.895 [2024-11-18 07:20:25.842285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.895 [2024-11-18 07:20:25.842316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.895 [2024-11-18 07:20:25.842333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.895 [2024-11-18 07:20:25.846871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.895 [2024-11-18 07:20:25.846902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.895 [2024-11-18 07:20:25.846919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.895 [2024-11-18 07:20:25.851504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.895 [2024-11-18 07:20:25.851534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.895 [2024-11-18 07:20:25.851550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.895 [2024-11-18 07:20:25.855934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.895 [2024-11-18 07:20:25.855965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.895 [2024-11-18 07:20:25.855983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.895 [2024-11-18 07:20:25.860533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.895 [2024-11-18 07:20:25.860564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.895 [2024-11-18 07:20:25.860580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.895 [2024-11-18 07:20:25.865153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.895 [2024-11-18 07:20:25.865184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.895 [2024-11-18 07:20:25.865200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.895 [2024-11-18 07:20:25.869742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:04.895 [2024-11-18 07:20:25.869772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.895 [2024-11-18 07:20:25.869793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.156 [2024-11-18 07:20:25.874532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.156 
[2024-11-18 07:20:25.874563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.156 [2024-11-18 07:20:25.874585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.156 [2024-11-18 07:20:25.879309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.157 [2024-11-18 07:20:25.879340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.157 [2024-11-18 07:20:25.879357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.157 [2024-11-18 07:20:25.884123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.157 [2024-11-18 07:20:25.884154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.157 [2024-11-18 07:20:25.884172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.157 [2024-11-18 07:20:25.889441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.157 [2024-11-18 07:20:25.889496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.157 [2024-11-18 07:20:25.889517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.157 [2024-11-18 07:20:25.893850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.157 [2024-11-18 07:20:25.893882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.157 [2024-11-18 07:20:25.893900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.157 [2024-11-18 07:20:25.896631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.157 [2024-11-18 07:20:25.896661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.157 [2024-11-18 07:20:25.896678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.157 [2024-11-18 07:20:25.900537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.157 [2024-11-18 07:20:25.900575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.157 [2024-11-18 07:20:25.900592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.157 [2024-11-18 07:20:25.904858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1ae5930) 00:35:05.157 [2024-11-18 07:20:25.904889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.157 [2024-11-18 07:20:25.904906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.157 [2024-11-18 07:20:25.909581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.157 [2024-11-18 07:20:25.909612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.157 [2024-11-18 07:20:25.909629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.157 [2024-11-18 07:20:25.914143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.157 [2024-11-18 07:20:25.914180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.157 [2024-11-18 07:20:25.914198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.157 [2024-11-18 07:20:25.918841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.157 [2024-11-18 07:20:25.918873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.157 [2024-11-18 07:20:25.918890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.157 [2024-11-18 07:20:25.923548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.157 [2024-11-18 07:20:25.923578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.157 [2024-11-18 07:20:25.923595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.157 [2024-11-18 07:20:25.928216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.157 [2024-11-18 07:20:25.928247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.157 [2024-11-18 07:20:25.928265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.157 [2024-11-18 07:20:25.932802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.157 [2024-11-18 07:20:25.932833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.157 [2024-11-18 07:20:25.932850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.157 [2024-11-18 07:20:25.938079] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.157 [2024-11-18 07:20:25.938111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.157 [2024-11-18 07:20:25.938130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.157 [2024-11-18 07:20:25.943079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.157 [2024-11-18 07:20:25.943110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.157 [2024-11-18 07:20:25.943134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.157 [2024-11-18 07:20:25.948822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.157 [2024-11-18 07:20:25.948855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.157 [2024-11-18 07:20:25.948874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.157 [2024-11-18 07:20:25.953504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.157 [2024-11-18 07:20:25.953545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.157 [2024-11-18 07:20:25.953562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.157 [2024-11-18 07:20:25.956653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.157 [2024-11-18 07:20:25.956684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.157 [2024-11-18 07:20:25.956717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.157 [2024-11-18 07:20:25.961506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.157 [2024-11-18 07:20:25.961537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.157 [2024-11-18 07:20:25.961555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.157 [2024-11-18 07:20:25.966100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.157 [2024-11-18 07:20:25.966132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.157 [2024-11-18 07:20:25.966149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:35:05.157 [2024-11-18 07:20:25.971707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.157 [2024-11-18 07:20:25.971738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.157 [2024-11-18 07:20:25.971754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.157 [2024-11-18 07:20:25.977034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.157 [2024-11-18 07:20:25.977066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.157 [2024-11-18 07:20:25.977084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.157 [2024-11-18 07:20:25.982054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.157 [2024-11-18 07:20:25.982085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:25.982103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:25.987301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:25.987333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:25.987349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:25.991925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:25.991956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:25.991973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:25.996456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:25.996487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:25.996519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:26.001140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:26.001171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:26.001188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:26.006817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:26.006849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:26.006868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:26.012753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:26.012784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:26.012802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:26.018524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:26.018556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:26.018573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:26.025532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:26.025564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:26.025582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:26.030935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:26.030968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:26.030986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:26.035925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:26.035958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:26.035976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:26.039538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:26.039569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:26.039588] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:26.043365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:26.043395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:26.043412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:26.047810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:26.047843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:26.047864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:26.052693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:26.052725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:26.052749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:26.057039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:26.057074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:26.057100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:26.061639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:26.061670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:26.061693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:26.066268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:26.066299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:26.066323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:26.071449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:26.071500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 
07:20:26.071520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:26.078118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:26.078162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:26.078184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:26.085677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:26.085709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:26.085743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:26.091291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:26.091336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:26.091354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:26.097034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:26.097083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:26.097111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:26.101853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:26.101898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:26.101915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.158 [2024-11-18 07:20:26.106473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.158 [2024-11-18 07:20:26.106533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.158 [2024-11-18 07:20:26.106552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.159 [2024-11-18 07:20:26.110913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.159 [2024-11-18 07:20:26.110943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:05.159 [2024-11-18 07:20:26.110967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.159 [2024-11-18 07:20:26.115339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.159 [2024-11-18 07:20:26.115370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.159 [2024-11-18 07:20:26.115389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.159 [2024-11-18 07:20:26.121092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.159 [2024-11-18 07:20:26.121124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.159 [2024-11-18 07:20:26.121148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.159 [2024-11-18 07:20:26.128812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.159 [2024-11-18 07:20:26.128854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.159 [2024-11-18 07:20:26.128871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.419 [2024-11-18 07:20:26.135333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.419 [2024-11-18 07:20:26.135371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.419 [2024-11-18 07:20:26.135390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.419 [2024-11-18 07:20:26.142118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.419 [2024-11-18 07:20:26.142151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.419 [2024-11-18 07:20:26.142171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.419 [2024-11-18 07:20:26.148553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.419 [2024-11-18 07:20:26.148586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.419 [2024-11-18 07:20:26.148605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.419 [2024-11-18 07:20:26.153540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.419 [2024-11-18 07:20:26.153572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 
nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.419 [2024-11-18 07:20:26.153590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.419 [2024-11-18 07:20:26.156692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.419 [2024-11-18 07:20:26.156721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.419 [2024-11-18 07:20:26.156742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.419 [2024-11-18 07:20:26.162098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.419 [2024-11-18 07:20:26.162142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.419 [2024-11-18 07:20:26.162166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.419 [2024-11-18 07:20:26.167281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.419 [2024-11-18 07:20:26.167310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.419 [2024-11-18 07:20:26.167330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.419 [2024-11-18 07:20:26.172089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.419 [2024-11-18 07:20:26.172121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.419 [2024-11-18 07:20:26.172152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.419 [2024-11-18 07:20:26.177771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.419 [2024-11-18 07:20:26.177823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.419 [2024-11-18 07:20:26.177848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.419 [2024-11-18 07:20:26.182549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.419 [2024-11-18 07:20:26.182578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.419 [2024-11-18 07:20:26.182599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.419 [2024-11-18 07:20:26.187113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.419 [2024-11-18 07:20:26.187141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.419 [2024-11-18 07:20:26.187158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.419 [2024-11-18 07:20:26.191998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.419 [2024-11-18 07:20:26.192026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.419 [2024-11-18 07:20:26.192045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.419 [2024-11-18 07:20:26.196378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.419 [2024-11-18 07:20:26.196408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.419 [2024-11-18 07:20:26.196447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.419 [2024-11-18 07:20:26.201996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.419 [2024-11-18 07:20:26.202026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.419 [2024-11-18 07:20:26.202045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.419 [2024-11-18 07:20:26.209109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.419 [2024-11-18 07:20:26.209155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.419 [2024-11-18 07:20:26.209172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.419 [2024-11-18 07:20:26.216566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.419 [2024-11-18 07:20:26.216597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.419 [2024-11-18 07:20:26.216629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.419 [2024-11-18 07:20:26.224145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.419 [2024-11-18 07:20:26.224175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.419 [2024-11-18 07:20:26.224191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.419 [2024-11-18 07:20:26.232069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 
00:35:05.419 [2024-11-18 07:20:26.232113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.419 [2024-11-18 07:20:26.232140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.419 [2024-11-18 07:20:26.240531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.419 [2024-11-18 07:20:26.240563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.419 [2024-11-18 07:20:26.240595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.419 [2024-11-18 07:20:26.247930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.419 [2024-11-18 07:20:26.247962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.419 [2024-11-18 07:20:26.247994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.419 [2024-11-18 07:20:26.253981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.254010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.254026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.259914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.259944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.259962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.265634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.265680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.265697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.272369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.272400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.272418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.279072] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.279102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.279133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.285612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.285642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.285659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.291648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.291678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.291700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.297691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.297721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.297741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.303421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.303454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.303472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.309531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.309576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.309594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.315376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.315422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.315439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.321693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.321724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.321743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.327411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.327443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.327464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.333403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.333435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.333455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.338794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.338824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.338855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.344282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.344313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.344333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.350594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.350625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.350644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.356628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.356659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.356679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.362648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.362680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.362698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.368871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.368903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.368922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.374788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.374819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.374847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.379903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.379935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.379955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.384392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.384423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.384442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.390032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.390069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.390088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.420 [2024-11-18 07:20:26.394876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.420 [2024-11-18 07:20:26.394906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.420 [2024-11-18 07:20:26.394926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.680 [2024-11-18 07:20:26.399990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.680 [2024-11-18 07:20:26.400021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.680 [2024-11-18 07:20:26.400045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.680 [2024-11-18 07:20:26.403807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.680 [2024-11-18 07:20:26.403850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.680 [2024-11-18 07:20:26.403873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.680 [2024-11-18 07:20:26.409215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.680 [2024-11-18 07:20:26.409260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.680 [2024-11-18 07:20:26.409281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.680 [2024-11-18 07:20:26.416874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.680 [2024-11-18 07:20:26.416905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.680 [2024-11-18 07:20:26.416942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.680 [2024-11-18 07:20:26.423683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.680 [2024-11-18 07:20:26.423713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.680 [2024-11-18 07:20:26.423730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.680 [2024-11-18 07:20:26.429722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.680 [2024-11-18 07:20:26.429753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.680 [2024-11-18 07:20:26.429774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.680 [2024-11-18 07:20:26.435831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.680 [2024-11-18 07:20:26.435862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:05.680 [2024-11-18 07:20:26.435887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.680 [2024-11-18 07:20:26.440930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.680 [2024-11-18 07:20:26.440975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.680 [2024-11-18 07:20:26.440992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.680 [2024-11-18 07:20:26.446329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.680 [2024-11-18 07:20:26.446360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.680 [2024-11-18 07:20:26.446379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.680 [2024-11-18 07:20:26.451398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.680 [2024-11-18 07:20:26.451429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.680 [2024-11-18 07:20:26.451450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.456697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.456743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.456760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.461346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.461377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.461397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.466095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.466126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.466146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.471476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.471513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.471532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.478271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.478301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.478333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.485613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.485646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.485671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.492642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.492672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.492704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.499183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.499214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.499234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.505695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.505725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.505758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.510650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.510681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.510699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.516424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.516455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.516474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.523855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.523885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.523908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.530760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.530806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.530825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.537235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.537266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.537300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.543549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.543586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.543606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.549580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.549611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.549630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.554423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.554454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.554472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.557823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 
00:35:05.681 [2024-11-18 07:20:26.557869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.557886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.563468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.563521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.563539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.569116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.569146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.569164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.574118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.574148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.574179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.579543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.579572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.579591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.585145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.585175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.585212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.590600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.590630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.681 [2024-11-18 07:20:26.590648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.681 [2024-11-18 07:20:26.595532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.681 [2024-11-18 07:20:26.595561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.682 [2024-11-18 07:20:26.595580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.682 [2024-11-18 07:20:26.600141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.682 [2024-11-18 07:20:26.600173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.682 [2024-11-18 07:20:26.600192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.682 [2024-11-18 07:20:26.605202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.682 [2024-11-18 07:20:26.605233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.682 [2024-11-18 07:20:26.605252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.682 5613.00 IOPS, 701.62 MiB/s [2024-11-18T06:20:26.660Z] [2024-11-18 07:20:26.610945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.682 [2024-11-18 07:20:26.610976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.682 [2024-11-18 07:20:26.610993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.682 [2024-11-18 07:20:26.615264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.682 [2024-11-18 07:20:26.615295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.682 [2024-11-18 07:20:26.615313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.682 [2024-11-18 07:20:26.619788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.682 [2024-11-18 07:20:26.619817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.682 [2024-11-18 07:20:26.619836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.682 [2024-11-18 07:20:26.624277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.682 [2024-11-18 07:20:26.624307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.682 [2024-11-18 07:20:26.624323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:35:05.682 [2024-11-18 07:20:26.628864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.682 [2024-11-18 07:20:26.628899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.682 [2024-11-18 07:20:26.628919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.682 [2024-11-18 07:20:26.633452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.682 [2024-11-18 07:20:26.633486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.682 [2024-11-18 07:20:26.633513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.682 [2024-11-18 07:20:26.638100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.682 [2024-11-18 07:20:26.638130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.682 [2024-11-18 07:20:26.638149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.682 [2024-11-18 07:20:26.642648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.682 [2024-11-18 07:20:26.642677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.682 [2024-11-18 07:20:26.642694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.682 [2024-11-18 07:20:26.647777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.682 [2024-11-18 07:20:26.647807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.682 [2024-11-18 07:20:26.647826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.682 [2024-11-18 07:20:26.652708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.682 [2024-11-18 07:20:26.652739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.682 [2024-11-18 07:20:26.652758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.682 [2024-11-18 07:20:26.657349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.682 [2024-11-18 07:20:26.657379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.682 [2024-11-18 07:20:26.657399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.941 [2024-11-18 07:20:26.661955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.942 [2024-11-18 07:20:26.661985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.942 [2024-11-18 07:20:26.662009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.942 [2024-11-18 07:20:26.666509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.942 [2024-11-18 07:20:26.666539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.942 [2024-11-18 07:20:26.666561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.942 [2024-11-18 07:20:26.672018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.942 [2024-11-18 07:20:26.672050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.942 [2024-11-18 07:20:26.672068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.942 [2024-11-18 07:20:26.678669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.942 [2024-11-18 07:20:26.678700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.942 [2024-11-18 07:20:26.678717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.942 [2024-11-18 07:20:26.686777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.942 [2024-11-18 07:20:26.686823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.942 [2024-11-18 07:20:26.686842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.942 [2024-11-18 07:20:26.693525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.942 [2024-11-18 07:20:26.693556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.942 [2024-11-18 07:20:26.693588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.942 [2024-11-18 07:20:26.699908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.942 [2024-11-18 07:20:26.699939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.942 [2024-11-18 07:20:26.699959] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.942 [2024-11-18 07:20:26.704834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.942 [2024-11-18 07:20:26.704864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.942 [2024-11-18 07:20:26.704881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.942 [2024-11-18 07:20:26.709881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.942 [2024-11-18 07:20:26.709912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.942 [2024-11-18 07:20:26.709934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.942 [2024-11-18 07:20:26.715427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.942 [2024-11-18 07:20:26.715458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.942 [2024-11-18 07:20:26.715476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.942 [2024-11-18 07:20:26.720999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.942 [2024-11-18 07:20:26.721031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.942 [2024-11-18 07:20:26.721061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.942 [2024-11-18 07:20:26.726039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.942 [2024-11-18 07:20:26.726070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.942 [2024-11-18 07:20:26.726090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.942 [2024-11-18 07:20:26.731799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.942 [2024-11-18 07:20:26.731830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.942 [2024-11-18 07:20:26.731849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.942 [2024-11-18 07:20:26.737857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.942 [2024-11-18 07:20:26.737889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:05.942 [2024-11-18 07:20:26.737909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.942 [2024-11-18 07:20:26.744046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.942 [2024-11-18 07:20:26.744078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.942 [2024-11-18 07:20:26.744098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.942 [2024-11-18 07:20:26.749451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.942 [2024-11-18 07:20:26.749483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.942 [2024-11-18 07:20:26.749518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.942 [2024-11-18 07:20:26.754412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.942 [2024-11-18 07:20:26.754444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.942 [2024-11-18 07:20:26.754462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.942 [2024-11-18 07:20:26.759589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.942 [2024-11-18 07:20:26.759619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.942 [2024-11-18 07:20:26.759638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.942 [2024-11-18 07:20:26.764748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.942 [2024-11-18 07:20:26.764780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.942 [2024-11-18 07:20:26.764797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.942 [2024-11-18 07:20:26.769744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.942 [2024-11-18 07:20:26.769780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.942 [2024-11-18 07:20:26.769798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.942 [2024-11-18 07:20:26.774668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.942 [2024-11-18 07:20:26.774698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20128 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.942 [2024-11-18 07:20:26.774718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.942 [2024-11-18 07:20:26.780051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.942 [2024-11-18 07:20:26.780082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.942 [2024-11-18 07:20:26.780101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.784573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.943 [2024-11-18 07:20:26.784603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.784621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.789547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.943 [2024-11-18 07:20:26.789577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.789598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.795394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.943 [2024-11-18 07:20:26.795424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.795442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.803018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.943 [2024-11-18 07:20:26.803050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.803069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.809081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.943 [2024-11-18 07:20:26.809113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.809130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.814646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.943 [2024-11-18 07:20:26.814677] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.814697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.819846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.943 [2024-11-18 07:20:26.819876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.819895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.824283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.943 [2024-11-18 07:20:26.824313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.824332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.827629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.943 [2024-11-18 07:20:26.827660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.827680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.831963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.943 [2024-11-18 07:20:26.832008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.832026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.837670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.943 [2024-11-18 07:20:26.837701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.837726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.842738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.943 [2024-11-18 07:20:26.842788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.842805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.847220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 
00:35:05.943 [2024-11-18 07:20:26.847251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.847268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.851864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.943 [2024-11-18 07:20:26.851895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.851912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.856350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.943 [2024-11-18 07:20:26.856380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.856408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.860412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.943 [2024-11-18 07:20:26.860442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.860461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.863441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.943 [2024-11-18 07:20:26.863469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.863509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.867091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.943 [2024-11-18 07:20:26.867121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.867138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.871614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.943 [2024-11-18 07:20:26.871645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.871663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.876137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.943 [2024-11-18 07:20:26.876168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.876187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.880680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.943 [2024-11-18 07:20:26.880711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.880728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.885202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.943 [2024-11-18 07:20:26.885232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.885252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.889679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.943 [2024-11-18 07:20:26.889709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.889726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.943 [2024-11-18 07:20:26.894121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.943 [2024-11-18 07:20:26.894166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.943 [2024-11-18 07:20:26.894185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.944 [2024-11-18 07:20:26.898756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.944 [2024-11-18 07:20:26.898787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.944 [2024-11-18 07:20:26.898821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.944 [2024-11-18 07:20:26.903320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.944 [2024-11-18 07:20:26.903350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.944 [2024-11-18 07:20:26.903368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.944 [2024-11-18 07:20:26.907958] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.944 [2024-11-18 07:20:26.907987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.944 [2024-11-18 07:20:26.908005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.944 [2024-11-18 07:20:26.912969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.944 [2024-11-18 07:20:26.913015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.944 [2024-11-18 07:20:26.913034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.944 [2024-11-18 07:20:26.918339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:05.944 [2024-11-18 07:20:26.918370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.944 [2024-11-18 07:20:26.918403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.206 [2024-11-18 07:20:26.923309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.206 [2024-11-18 07:20:26.923355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.206 [2024-11-18 07:20:26.923377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.206 [2024-11-18 07:20:26.928965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.206 [2024-11-18 07:20:26.929008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.206 [2024-11-18 07:20:26.929026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.206 [2024-11-18 07:20:26.934802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.206 [2024-11-18 07:20:26.934833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.206 [2024-11-18 07:20:26.934857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.206 [2024-11-18 07:20:26.940903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.206 [2024-11-18 07:20:26.940946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.206 [2024-11-18 07:20:26.940964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:35:06.206 [2024-11-18 07:20:26.948127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.206 [2024-11-18 07:20:26.948159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.206 [2024-11-18 07:20:26.948192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.206 [2024-11-18 07:20:26.954899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.206 [2024-11-18 07:20:26.954931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.206 [2024-11-18 07:20:26.954951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.206 [2024-11-18 07:20:26.962984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.206 [2024-11-18 07:20:26.963016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.206 [2024-11-18 07:20:26.963035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.206 [2024-11-18 07:20:26.970228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.206 [2024-11-18 07:20:26.970261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.206 [2024-11-18 07:20:26.970278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.206 [2024-11-18 07:20:26.976069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.206 [2024-11-18 07:20:26.976102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.206 [2024-11-18 07:20:26.976122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.206 [2024-11-18 07:20:26.979922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.206 [2024-11-18 07:20:26.979954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.206 [2024-11-18 07:20:26.979975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.206 [2024-11-18 07:20:26.983800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.206 [2024-11-18 07:20:26.983830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.206 [2024-11-18 07:20:26.983861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.206 [2024-11-18 07:20:26.988972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.206 [2024-11-18 07:20:26.989009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.206 [2024-11-18 07:20:26.989028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.206 [2024-11-18 07:20:26.994644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.206 [2024-11-18 07:20:26.994676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.206 [2024-11-18 07:20:26.994693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.206 [2024-11-18 07:20:27.000452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.206 [2024-11-18 07:20:27.000501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.206 [2024-11-18 07:20:27.000523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.206 [2024-11-18 07:20:27.005749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.206 [2024-11-18 07:20:27.005805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.206 [2024-11-18 07:20:27.005821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.206 [2024-11-18 07:20:27.011551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.206 [2024-11-18 07:20:27.011582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.206 [2024-11-18 07:20:27.011601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.206 [2024-11-18 07:20:27.017045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.206 [2024-11-18 07:20:27.017077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.206 [2024-11-18 07:20:27.017095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.206 [2024-11-18 07:20:27.022225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.206 [2024-11-18 07:20:27.022254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.206 [2024-11-18 07:20:27.022274] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.206 [2024-11-18 07:20:27.027577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.206 [2024-11-18 07:20:27.027609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.206 [2024-11-18 07:20:27.027627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.206 [2024-11-18 07:20:27.033143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.206 [2024-11-18 07:20:27.033173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.206 [2024-11-18 07:20:27.033189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.206 [2024-11-18 07:20:27.038131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.206 [2024-11-18 07:20:27.038163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.038181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.042621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.042666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.042684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.047159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.047190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.047222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.051704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.051734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.051750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.056213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.056259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.056276] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.060780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.060825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.060841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.065550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.065581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.065599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.070102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.070132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.070150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.074902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.074948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.074970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.079623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.079652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.079669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.084259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.084289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.084306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.089479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.089518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:06.207 [2024-11-18 07:20:27.089537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.096167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.096198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.096216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.103370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.103403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.103421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.108763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.108794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.108811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.114524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.114555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.114572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.120244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.120277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.120295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.126810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.126849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.126868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.132488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.132545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20032 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.132565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.137750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.137782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.137800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.143314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.143346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.143364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.149564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.149595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.149629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.156352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.156385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.156403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.163125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.163173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.163190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.207 [2024-11-18 07:20:27.169450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.207 [2024-11-18 07:20:27.169482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.207 [2024-11-18 07:20:27.169509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.208 [2024-11-18 07:20:27.175762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.208 [2024-11-18 07:20:27.175794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.208 [2024-11-18 07:20:27.175812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.208 [2024-11-18 07:20:27.182204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.208 [2024-11-18 07:20:27.182236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.208 [2024-11-18 07:20:27.182255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.470 [2024-11-18 07:20:27.187571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.470 [2024-11-18 07:20:27.187604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.470 [2024-11-18 07:20:27.187622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.470 [2024-11-18 07:20:27.192877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.470 [2024-11-18 07:20:27.192909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.470 [2024-11-18 07:20:27.192927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.470 [2024-11-18 07:20:27.197840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.470 [2024-11-18 07:20:27.197873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.470 [2024-11-18 07:20:27.197890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.470 [2024-11-18 07:20:27.202533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.470 [2024-11-18 07:20:27.202564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.470 [2024-11-18 07:20:27.202582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.470 [2024-11-18 07:20:27.207573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.470 [2024-11-18 07:20:27.207606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.470 [2024-11-18 07:20:27.207624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.470 [2024-11-18 07:20:27.213440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 
00:35:06.470 [2024-11-18 07:20:27.213487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.470 [2024-11-18 07:20:27.213513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.470 [2024-11-18 07:20:27.218947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.470 [2024-11-18 07:20:27.218978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.470 [2024-11-18 07:20:27.218997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.470 [2024-11-18 07:20:27.224136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.470 [2024-11-18 07:20:27.224168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.470 [2024-11-18 07:20:27.224192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.470 [2024-11-18 07:20:27.229243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.470 [2024-11-18 07:20:27.229274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.470 [2024-11-18 07:20:27.229292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.470 [2024-11-18 07:20:27.234619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.470 [2024-11-18 07:20:27.234651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.470 [2024-11-18 07:20:27.234669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.470 [2024-11-18 07:20:27.241207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.470 [2024-11-18 07:20:27.241240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.470 [2024-11-18 07:20:27.241259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.470 [2024-11-18 07:20:27.248761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.470 [2024-11-18 07:20:27.248803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.470 [2024-11-18 07:20:27.248820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.470 [2024-11-18 07:20:27.256590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.470 [2024-11-18 07:20:27.256621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.470 [2024-11-18 07:20:27.256639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.470 [2024-11-18 07:20:27.264210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.470 [2024-11-18 07:20:27.264243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.470 [2024-11-18 07:20:27.264276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.470 [2024-11-18 07:20:27.272050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.470 [2024-11-18 07:20:27.272098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.470 [2024-11-18 07:20:27.272116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.470 [2024-11-18 07:20:27.280043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.471 [2024-11-18 07:20:27.280075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.471 [2024-11-18 07:20:27.280093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.471 [2024-11-18 07:20:27.287720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.471 [2024-11-18 07:20:27.287755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.471 [2024-11-18 07:20:27.287774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.471 [2024-11-18 07:20:27.295285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.471 [2024-11-18 07:20:27.295316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.471 [2024-11-18 07:20:27.295335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.471 [2024-11-18 07:20:27.302904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.471 [2024-11-18 07:20:27.302936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.471 [2024-11-18 07:20:27.302954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.471 [2024-11-18 07:20:27.310662] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.471 [2024-11-18 07:20:27.310694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.471 [2024-11-18 07:20:27.310712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.471 [2024-11-18 07:20:27.318170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.471 [2024-11-18 07:20:27.318202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.471 [2024-11-18 07:20:27.318220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.471 [2024-11-18 07:20:27.325898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.471 [2024-11-18 07:20:27.325933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.471 [2024-11-18 07:20:27.325952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.471 [2024-11-18 07:20:27.333466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.471 [2024-11-18 07:20:27.333506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.471 [2024-11-18 07:20:27.333527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.471 [2024-11-18 07:20:27.341076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.471 [2024-11-18 07:20:27.341109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.471 [2024-11-18 07:20:27.341127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.471 [2024-11-18 07:20:27.348712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.471 [2024-11-18 07:20:27.348744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.471 [2024-11-18 07:20:27.348768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.471 [2024-11-18 07:20:27.355653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.471 [2024-11-18 07:20:27.355687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.471 [2024-11-18 07:20:27.355706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:35:06.471 [2024-11-18 07:20:27.361071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.471 [2024-11-18 07:20:27.361104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.471 [2024-11-18 07:20:27.361122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.471 [2024-11-18 07:20:27.366687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.471 [2024-11-18 07:20:27.366720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.471 [2024-11-18 07:20:27.366738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.471 [2024-11-18 07:20:27.373161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.471 [2024-11-18 07:20:27.373193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.471 [2024-11-18 07:20:27.373210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.471 [2024-11-18 07:20:27.379130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.471 [2024-11-18 07:20:27.379163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.471 [2024-11-18 07:20:27.379181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.471 [2024-11-18 07:20:27.384608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.471 [2024-11-18 07:20:27.384640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.471 [2024-11-18 07:20:27.384658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.471 [2024-11-18 07:20:27.389633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.471 [2024-11-18 07:20:27.389665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.471 [2024-11-18 07:20:27.389683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.471 [2024-11-18 07:20:27.394408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.471 [2024-11-18 07:20:27.394440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.471 [2024-11-18 07:20:27.394458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.471 [2024-11-18 07:20:27.399645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.471 [2024-11-18 07:20:27.399682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.471 [2024-11-18 07:20:27.399700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.471 [2024-11-18 07:20:27.405296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.471 [2024-11-18 07:20:27.405327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.471 [2024-11-18 07:20:27.405345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.471 [2024-11-18 07:20:27.411059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.471 [2024-11-18 07:20:27.411092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.471 [2024-11-18 07:20:27.411110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.471 [2024-11-18 07:20:27.416163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.471 [2024-11-18 07:20:27.416195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.471 [2024-11-18 07:20:27.416213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.471 [2024-11-18 07:20:27.422129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.471 [2024-11-18 07:20:27.422163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.471 [2024-11-18 07:20:27.422181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.472 [2024-11-18 07:20:27.428596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.472 [2024-11-18 07:20:27.428627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.472 [2024-11-18 07:20:27.428645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.472 [2024-11-18 07:20:27.434576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.472 [2024-11-18 07:20:27.434610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.472 [2024-11-18 07:20:27.434628] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.472 [2024-11-18 07:20:27.440710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.472 [2024-11-18 07:20:27.440743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.472 [2024-11-18 07:20:27.440761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.472 [2024-11-18 07:20:27.446644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.472 [2024-11-18 07:20:27.446676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.472 [2024-11-18 07:20:27.446694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.732 [2024-11-18 07:20:27.449910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.732 [2024-11-18 07:20:27.449942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.732 [2024-11-18 07:20:27.449960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.732 [2024-11-18 07:20:27.454793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.732 [2024-11-18 07:20:27.454824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.732 [2024-11-18 07:20:27.454855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.732 [2024-11-18 07:20:27.459428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.732 [2024-11-18 07:20:27.459460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.732 [2024-11-18 07:20:27.459477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.732 [2024-11-18 07:20:27.464720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.732 [2024-11-18 07:20:27.464752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.732 [2024-11-18 07:20:27.464769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.732 [2024-11-18 07:20:27.469941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.732 [2024-11-18 07:20:27.469972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.732 [2024-11-18 07:20:27.469988] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.732 [2024-11-18 07:20:27.474683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.732 [2024-11-18 07:20:27.474715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.732 [2024-11-18 07:20:27.474732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.732 [2024-11-18 07:20:27.479284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.732 [2024-11-18 07:20:27.479315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.732 [2024-11-18 07:20:27.479332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.732 [2024-11-18 07:20:27.483864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.732 [2024-11-18 07:20:27.483895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.732 [2024-11-18 07:20:27.483912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.732 [2024-11-18 07:20:27.488383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.732 [2024-11-18 07:20:27.488414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.732 [2024-11-18 07:20:27.488452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.732 [2024-11-18 07:20:27.492967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.732 [2024-11-18 07:20:27.492997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.732 [2024-11-18 07:20:27.493014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.732 [2024-11-18 07:20:27.498204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.732 [2024-11-18 07:20:27.498235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.732 [2024-11-18 07:20:27.498252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.732 [2024-11-18 07:20:27.504924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.732 [2024-11-18 07:20:27.504957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:06.732 [2024-11-18 07:20:27.504974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.732 [2024-11-18 07:20:27.512052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.732 [2024-11-18 07:20:27.512084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.732 [2024-11-18 07:20:27.512102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.732 [2024-11-18 07:20:27.517528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.732 [2024-11-18 07:20:27.517560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.732 [2024-11-18 07:20:27.517577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.732 [2024-11-18 07:20:27.523149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.732 [2024-11-18 07:20:27.523180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.732 [2024-11-18 07:20:27.523197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.732 [2024-11-18 07:20:27.527839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.732 [2024-11-18 07:20:27.527870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.732 [2024-11-18 07:20:27.527888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.732 [2024-11-18 07:20:27.532908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.732 [2024-11-18 07:20:27.532939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.732 [2024-11-18 07:20:27.532972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.732 [2024-11-18 07:20:27.538058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.732 [2024-11-18 07:20:27.538095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.732 [2024-11-18 07:20:27.538113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.732 [2024-11-18 07:20:27.543929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.732 [2024-11-18 07:20:27.543962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13696 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.732 [2024-11-18 07:20:27.543994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.733 [2024-11-18 07:20:27.551483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.733 [2024-11-18 07:20:27.551538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.733 [2024-11-18 07:20:27.551556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.733 [2024-11-18 07:20:27.557689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.733 [2024-11-18 07:20:27.557721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.733 [2024-11-18 07:20:27.557738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.733 [2024-11-18 07:20:27.563298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.733 [2024-11-18 07:20:27.563346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.733 [2024-11-18 07:20:27.563363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.733 [2024-11-18 07:20:27.568608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.733 [2024-11-18 07:20:27.568640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.733 [2024-11-18 07:20:27.568657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.733 [2024-11-18 07:20:27.573771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.733 [2024-11-18 07:20:27.573802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.733 [2024-11-18 07:20:27.573836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.733 [2024-11-18 07:20:27.578484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.733 [2024-11-18 07:20:27.578522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.733 [2024-11-18 07:20:27.578540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.733 [2024-11-18 07:20:27.582987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.733 [2024-11-18 07:20:27.583019] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.733 [2024-11-18 07:20:27.583036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.733 [2024-11-18 07:20:27.587405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.733 [2024-11-18 07:20:27.587436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.733 [2024-11-18 07:20:27.587453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.733 [2024-11-18 07:20:27.590657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.733 [2024-11-18 07:20:27.590688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.733 [2024-11-18 07:20:27.590704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.733 [2024-11-18 07:20:27.594661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.733 [2024-11-18 07:20:27.594692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.733 [2024-11-18 07:20:27.594709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.733 [2024-11-18 07:20:27.599709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.733 [2024-11-18 07:20:27.599740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.733 [2024-11-18 07:20:27.599757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.733 [2024-11-18 07:20:27.605676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.733 [2024-11-18 07:20:27.605723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.733 [2024-11-18 07:20:27.605740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.733 5632.50 IOPS, 704.06 MiB/s [2024-11-18T06:20:27.711Z] [2024-11-18 07:20:27.611469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae5930) 00:35:06.733 [2024-11-18 07:20:27.611508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.733 [2024-11-18 07:20:27.611528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.733 00:35:06.733 Latency(us) 00:35:06.733 [2024-11-18T06:20:27.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:35:06.733 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:06.733 nvme0n1 : 2.00 5633.30 704.16 0.00 0.00 2834.57 694.80 8349.77 00:35:06.733 [2024-11-18T06:20:27.711Z] =================================================================================================================== 00:35:06.733 [2024-11-18T06:20:27.711Z] Total : 5633.30 704.16 0.00 0.00 2834.57 694.80 8349.77 00:35:06.733 { 00:35:06.733 "results": [ 00:35:06.733 { 00:35:06.733 "job": "nvme0n1", 00:35:06.733 "core_mask": "0x2", 00:35:06.733 "workload": "randread", 00:35:06.733 "status": "finished", 00:35:06.733 "queue_depth": 16, 00:35:06.733 "io_size": 131072, 00:35:06.733 "runtime": 2.00433, 00:35:06.733 "iops": 5633.303897062859, 00:35:06.733 "mibps": 704.1629871328573, 00:35:06.733 "io_failed": 0, 00:35:06.733 "io_timeout": 0, 00:35:06.733 "avg_latency_us": 2834.5663043328514, 00:35:06.733 "min_latency_us": 694.802962962963, 00:35:06.733 "max_latency_us": 8349.771851851852 00:35:06.733 } 00:35:06.733 ], 00:35:06.733 "core_count": 1 00:35:06.733 } 00:35:06.733 07:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:06.733 07:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:06.733 07:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:06.733 07:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:06.733 | .driver_specific 00:35:06.733 | .nvme_error 00:35:06.733 | .status_code 00:35:06.733 | .command_transient_transport_error' 00:35:06.994 07:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 365 > 0 )) 00:35:06.994 07:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 396422 00:35:06.994 07:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 396422 ']' 00:35:06.994 07:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 396422 00:35:06.994 07:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:06.994 07:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:06.994 07:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396422 00:35:06.994 07:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:06.994 07:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:06.994 07:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396422' 00:35:06.994 killing process with pid 396422 00:35:06.994 07:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 396422 00:35:06.994 Received shutdown signal, test time was about 2.000000 seconds 00:35:06.994 00:35:06.994 Latency(us) 00:35:06.994 [2024-11-18T06:20:27.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:06.994 [2024-11-18T06:20:27.972Z] 
=================================================================================================================== 00:35:06.994 [2024-11-18T06:20:27.972Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:06.994 07:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 396422 00:35:07.253 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:35:07.253 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:07.253 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:07.253 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:07.253 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:07.253 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=396934 00:35:07.253 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:35:07.253 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 396934 /var/tmp/bperf.sock 00:35:07.253 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 396934 ']' 00:35:07.253 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:07.253 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:07.253 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:07.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:07.253 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:07.253 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:07.253 [2024-11-18 07:20:28.170260] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:35:07.253 [2024-11-18 07:20:28.170354] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396934 ] 00:35:07.512 [2024-11-18 07:20:28.236708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:07.512 [2024-11-18 07:20:28.283835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:07.512 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:07.512 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:07.512 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:07.512 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:07.771 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:07.771 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.771 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:07.771 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.771 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:07.771 07:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:08.340 nvme0n1 00:35:08.340 07:20:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:08.340 07:20:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.340 07:20:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:08.340 07:20:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.340 07:20:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:08.340 07:20:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:08.340 Running I/O for 2 seconds... 
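(The xtrace above captures one complete digest-error pass: start bdevperf against a local RPC socket, enable per-controller NVMe error counters with unlimited retries, clear and then re-arm CRC32C corruption in the accel layer, attach the controller with data digest enabled, drive I/O for two seconds, and finally read the transient-transport-error counter back out of bdev_get_iostat. The condensed sketch below is illustrative only, not the host/digest.sh source: it restates the commands visible in the trace with paths shortened, and the BPERF_SOCK, RPC, and errs names are introduced here for readability.)

# Condensed, illustrative sketch of the digest-error pass traced above.
# Socket path, target address, and NQN are taken from the log; helper names are ours.
BPERF_SOCK=/var/tmp/bperf.sock
RPC=./scripts/rpc.py

# Start bdevperf on core 1 (-m 2): randwrite, 4 KiB I/O, queue depth 128, 2 s runtime;
# -z makes it wait for a perform_tests RPC instead of starting immediately.
./build/examples/bdevperf -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &

# Keep per-controller NVMe error statistics and retry indefinitely, so digest failures
# surface as COMMAND TRANSIENT TRANSPORT ERROR completions rather than failing the job.
$RPC -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any previous CRC32C error injection (the trace issues this via the autotest
# rpc_cmd helper, i.e. against the application's default RPC socket, not bperf.sock).
$RPC accel_error_inject_error -o crc32c -t disable

# Attach the controller with data digest enabled, arm corruption of every 256th CRC32C
# operation, then run the 2-second workload.
$RPC -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$RPC accel_error_inject_error -o crc32c -t corrupt -i 256
./examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests

# The pass counts as successful when at least one transient transport error was
# recorded, mirroring the (( 365 > 0 )) check earlier in the trace.
errs=$($RPC -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errs > 0 ))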
00:35:08.340 [2024-11-18 07:20:29.283853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f46d0 00:35:08.340 [2024-11-18 07:20:29.284621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.340 [2024-11-18 07:20:29.284675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:08.340 [2024-11-18 07:20:29.296594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e0a68 00:35:08.340 [2024-11-18 07:20:29.297406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.340 [2024-11-18 07:20:29.297459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:08.340 [2024-11-18 07:20:29.308686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f8a50 00:35:08.340 [2024-11-18 07:20:29.309742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.340 [2024-11-18 07:20:29.309786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:08.599 [2024-11-18 07:20:29.320902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166ee190 00:35:08.599 [2024-11-18 07:20:29.321649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.599 [2024-11-18 07:20:29.321680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:08.599 [2024-11-18 07:20:29.333061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e6fa8 00:35:08.599 [2024-11-18 07:20:29.334125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.599 [2024-11-18 07:20:29.334167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:08.599 [2024-11-18 07:20:29.344946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166ea248 00:35:08.599 [2024-11-18 07:20:29.345985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.599 [2024-11-18 07:20:29.346028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:08.599 [2024-11-18 07:20:29.356629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f7538 00:35:08.599 [2024-11-18 07:20:29.357677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.599 [2024-11-18 07:20:29.357706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 
sqhd:0054 p:0 m:0 dnr:0 00:35:08.600 [2024-11-18 07:20:29.368314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e4de8 00:35:08.600 [2024-11-18 07:20:29.369596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.600 [2024-11-18 07:20:29.369639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:08.600 [2024-11-18 07:20:29.379912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166ea248 00:35:08.600 [2024-11-18 07:20:29.380918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.600 [2024-11-18 07:20:29.380961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:08.600 [2024-11-18 07:20:29.390791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166eea00 00:35:08.600 [2024-11-18 07:20:29.391721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.600 [2024-11-18 07:20:29.391766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:08.600 [2024-11-18 07:20:29.405090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166fac10 00:35:08.600 [2024-11-18 07:20:29.406566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.600 [2024-11-18 07:20:29.406618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:08.600 [2024-11-18 07:20:29.417409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e5ec8 00:35:08.600 [2024-11-18 07:20:29.419069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.600 [2024-11-18 07:20:29.419112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:08.600 [2024-11-18 07:20:29.425726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f2948 00:35:08.600 [2024-11-18 07:20:29.426499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.600 [2024-11-18 07:20:29.426542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:08.600 [2024-11-18 07:20:29.440008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e1710 00:35:08.600 [2024-11-18 07:20:29.441361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.600 [2024-11-18 07:20:29.441404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:08.600 [2024-11-18 07:20:29.451695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166ec840 00:35:08.600 [2024-11-18 07:20:29.452957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.600 [2024-11-18 07:20:29.453000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:08.600 [2024-11-18 07:20:29.462617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166feb58 00:35:08.600 [2024-11-18 07:20:29.463701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.600 [2024-11-18 07:20:29.463746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:08.600 [2024-11-18 07:20:29.476773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166fd208 00:35:08.600 [2024-11-18 07:20:29.478420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.600 [2024-11-18 07:20:29.478463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:08.600 [2024-11-18 07:20:29.485089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e1710 00:35:08.600 [2024-11-18 07:20:29.485927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.600 [2024-11-18 07:20:29.485970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:08.600 [2024-11-18 07:20:29.499417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166eea00 00:35:08.600 [2024-11-18 07:20:29.500881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.600 [2024-11-18 07:20:29.500909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:08.600 [2024-11-18 07:20:29.511829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166eaef0 00:35:08.600 [2024-11-18 07:20:29.513378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.600 [2024-11-18 07:20:29.513420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:08.600 [2024-11-18 07:20:29.520152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f8a50 00:35:08.600 [2024-11-18 07:20:29.520886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.600 [2024-11-18 07:20:29.520912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:08.600 [2024-11-18 07:20:29.531862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166fef90 00:35:08.600 [2024-11-18 07:20:29.532592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.600 [2024-11-18 07:20:29.532619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:08.600 [2024-11-18 07:20:29.543976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e84c0 00:35:08.600 [2024-11-18 07:20:29.544737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.600 [2024-11-18 07:20:29.544782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:08.600 [2024-11-18 07:20:29.558058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f35f0 00:35:08.600 [2024-11-18 07:20:29.559406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.600 [2024-11-18 07:20:29.559449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:08.600 [2024-11-18 07:20:29.570188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166fb480 00:35:08.600 [2024-11-18 07:20:29.571505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.600 [2024-11-18 07:20:29.571549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:08.861 [2024-11-18 07:20:29.581830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e6fa8 00:35:08.861 [2024-11-18 07:20:29.583049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.861 [2024-11-18 07:20:29.583092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:08.861 [2024-11-18 07:20:29.593281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e4578 00:35:08.861 [2024-11-18 07:20:29.594456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.861 [2024-11-18 07:20:29.594507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:08.861 [2024-11-18 07:20:29.607590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e1b48 00:35:08.861 [2024-11-18 07:20:29.609375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.861 [2024-11-18 07:20:29.609417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:08.861 [2024-11-18 07:20:29.616020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f20d8 00:35:08.861 [2024-11-18 07:20:29.616784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.861 [2024-11-18 07:20:29.616826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:08.861 [2024-11-18 07:20:29.628259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e9e10 00:35:08.861 [2024-11-18 07:20:29.629367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.861 [2024-11-18 07:20:29.629410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:08.861 [2024-11-18 07:20:29.640588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166eb328 00:35:08.861 [2024-11-18 07:20:29.641791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.861 [2024-11-18 07:20:29.641839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:08.861 [2024-11-18 07:20:29.654738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e5ec8 00:35:08.861 [2024-11-18 07:20:29.656520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.861 [2024-11-18 07:20:29.656567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:08.861 [2024-11-18 07:20:29.663094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166ee5c8 00:35:08.861 [2024-11-18 07:20:29.664061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.861 [2024-11-18 07:20:29.664104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:08.861 [2024-11-18 07:20:29.677521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166eff18 00:35:08.861 [2024-11-18 07:20:29.679065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.861 [2024-11-18 07:20:29.679108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:08.861 [2024-11-18 07:20:29.689942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166fb048 00:35:08.861 [2024-11-18 07:20:29.691628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.861 [2024-11-18 07:20:29.691670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:08.861 [2024-11-18 07:20:29.698267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166ee5c8 00:35:08.861 [2024-11-18 07:20:29.698972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.861 [2024-11-18 07:20:29.698999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:08.861 [2024-11-18 07:20:29.710537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f7100 00:35:08.862 [2024-11-18 07:20:29.711636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.862 [2024-11-18 07:20:29.711686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:08.862 [2024-11-18 07:20:29.724946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e49b0 00:35:08.862 [2024-11-18 07:20:29.726507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.862 [2024-11-18 07:20:29.726549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:08.862 [2024-11-18 07:20:29.733245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166fb8b8 00:35:08.862 [2024-11-18 07:20:29.734006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.862 [2024-11-18 07:20:29.734048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:08.862 [2024-11-18 07:20:29.746817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f8a50 00:35:08.862 [2024-11-18 07:20:29.747892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.862 [2024-11-18 07:20:29.747920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:08.862 [2024-11-18 07:20:29.758347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166edd58 00:35:08.862 [2024-11-18 07:20:29.759321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.862 [2024-11-18 07:20:29.759351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:08.862 [2024-11-18 07:20:29.772222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166ee190 00:35:08.862 [2024-11-18 07:20:29.773886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.862 [2024-11-18 07:20:29.773914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:08.862 [2024-11-18 07:20:29.780487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166ee190 00:35:08.862 [2024-11-18 07:20:29.781282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.862 [2024-11-18 07:20:29.781309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:08.862 [2024-11-18 07:20:29.792854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f6020 00:35:08.862 [2024-11-18 07:20:29.793792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.862 [2024-11-18 07:20:29.793819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:08.862 [2024-11-18 07:20:29.805083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166eb328 00:35:08.862 [2024-11-18 07:20:29.806206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.862 [2024-11-18 07:20:29.806247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:08.862 [2024-11-18 07:20:29.817382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e7818 00:35:08.862 [2024-11-18 07:20:29.818479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.862 [2024-11-18 07:20:29.818528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:08.862 [2024-11-18 07:20:29.828689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166fb048 00:35:08.862 [2024-11-18 07:20:29.829658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.862 [2024-11-18 07:20:29.829701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:09.124 [2024-11-18 07:20:29.840350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166fd640 00:35:09.124 [2024-11-18 07:20:29.841337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.124 [2024-11-18 07:20:29.841364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:09.124 [2024-11-18 07:20:29.852810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166ef270 00:35:09.124 [2024-11-18 07:20:29.853942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.124 [2024-11-18 07:20:29.853969] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:09.124 [2024-11-18 07:20:29.865179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e4de8 00:35:09.124 [2024-11-18 07:20:29.866414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.124 [2024-11-18 07:20:29.866456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:09.124 [2024-11-18 07:20:29.876902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166efae0 00:35:09.124 [2024-11-18 07:20:29.878012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.124 [2024-11-18 07:20:29.878055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:09.124 [2024-11-18 07:20:29.890091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e38d0 00:35:09.124 [2024-11-18 07:20:29.891738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.124 [2024-11-18 07:20:29.891781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:09.124 [2024-11-18 07:20:29.898682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166dfdc0 00:35:09.124 [2024-11-18 07:20:29.899370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.125 [2024-11-18 07:20:29.899411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.125 [2024-11-18 07:20:29.913127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e95a0 00:35:09.125 [2024-11-18 07:20:29.914446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.125 [2024-11-18 07:20:29.914488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:09.125 [2024-11-18 07:20:29.924063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f31b8 00:35:09.125 [2024-11-18 07:20:29.925132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.125 [2024-11-18 07:20:29.925162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:09.125 [2024-11-18 07:20:29.935410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f7538 00:35:09.125 [2024-11-18 07:20:29.936201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.125 [2024-11-18 
07:20:29.936244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:09.125 [2024-11-18 07:20:29.949782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f2948 00:35:09.125 [2024-11-18 07:20:29.951406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.125 [2024-11-18 07:20:29.951434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:09.125 [2024-11-18 07:20:29.961950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166ed0b0 00:35:09.125 [2024-11-18 07:20:29.963583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.125 [2024-11-18 07:20:29.963628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:09.125 [2024-11-18 07:20:29.970062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f8e88 00:35:09.125 [2024-11-18 07:20:29.970813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.125 [2024-11-18 07:20:29.970840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:09.125 [2024-11-18 07:20:29.982587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f31b8 00:35:09.125 [2024-11-18 07:20:29.983497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.125 [2024-11-18 07:20:29.983551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:09.125 [2024-11-18 07:20:29.994979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e3060 00:35:09.125 [2024-11-18 07:20:29.996035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.125 [2024-11-18 07:20:29.996063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:09.125 [2024-11-18 07:20:30.006927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166eee38 00:35:09.125 [2024-11-18 07:20:30.007752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.125 [2024-11-18 07:20:30.007783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:09.125 [2024-11-18 07:20:30.022342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f46d0 00:35:09.125 [2024-11-18 07:20:30.024104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:09.125 [2024-11-18 07:20:30.024159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:09.125 [2024-11-18 07:20:30.035269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166fa7d8 00:35:09.125 [2024-11-18 07:20:30.037127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.125 [2024-11-18 07:20:30.037172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:09.125 [2024-11-18 07:20:30.044307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166feb58 00:35:09.125 [2024-11-18 07:20:30.045177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.125 [2024-11-18 07:20:30.045223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:09.125 [2024-11-18 07:20:30.057134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166ebfd0 00:35:09.125 [2024-11-18 07:20:30.058270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.125 [2024-11-18 07:20:30.058320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:09.125 [2024-11-18 07:20:30.072148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e99d8 00:35:09.125 [2024-11-18 07:20:30.073868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.125 [2024-11-18 07:20:30.073914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:09.125 [2024-11-18 07:20:30.084941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e9e10 00:35:09.125 [2024-11-18 07:20:30.086857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.125 [2024-11-18 07:20:30.086888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:09.125 [2024-11-18 07:20:30.093695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0350 00:35:09.125 [2024-11-18 07:20:30.094681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.125 [2024-11-18 07:20:30.094711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:09.384 [2024-11-18 07:20:30.106270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f7538 00:35:09.384 [2024-11-18 07:20:30.107286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5075 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:35:09.384 [2024-11-18 07:20:30.107315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:09.384 [2024-11-18 07:20:30.118475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166fda78 00:35:09.384 [2024-11-18 07:20:30.119502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.384 [2024-11-18 07:20:30.119532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:09.384 [2024-11-18 07:20:30.130781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f8a50 00:35:09.384 [2024-11-18 07:20:30.131588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.384 [2024-11-18 07:20:30.131617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:09.384 [2024-11-18 07:20:30.142070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166de470 00:35:09.384 [2024-11-18 07:20:30.142796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.384 [2024-11-18 07:20:30.142824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:09.384 [2024-11-18 07:20:30.156293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e3d08 00:35:09.384 [2024-11-18 07:20:30.157371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.384 [2024-11-18 07:20:30.157400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:09.384 [2024-11-18 07:20:30.167475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166de470 00:35:09.384 [2024-11-18 07:20:30.168463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.384 [2024-11-18 07:20:30.168496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:09.384 [2024-11-18 07:20:30.180347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0788 00:35:09.384 [2024-11-18 07:20:30.181528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.384 [2024-11-18 07:20:30.181573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:09.384 [2024-11-18 07:20:30.193109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f6890 00:35:09.384 [2024-11-18 07:20:30.194435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16315 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.384 [2024-11-18 07:20:30.194464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:09.384 [2024-11-18 07:20:30.205549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e0630 00:35:09.384 [2024-11-18 07:20:30.206465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.385 [2024-11-18 07:20:30.206514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:09.385 [2024-11-18 07:20:30.217384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e27f0 00:35:09.385 [2024-11-18 07:20:30.218602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.385 [2024-11-18 07:20:30.218632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:09.385 [2024-11-18 07:20:30.229387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f4298 00:35:09.385 [2024-11-18 07:20:30.230644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.385 [2024-11-18 07:20:30.230674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:09.385 [2024-11-18 07:20:30.244250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166fcdd0 00:35:09.385 [2024-11-18 07:20:30.246127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.385 [2024-11-18 07:20:30.246170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:09.385 [2024-11-18 07:20:30.253017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f3e60 00:35:09.385 [2024-11-18 07:20:30.253953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.385 [2024-11-18 07:20:30.253982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:09.385 [2024-11-18 07:20:30.268661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166eb760 00:35:09.385 [2024-11-18 07:20:30.272210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.385 [2024-11-18 07:20:30.272254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:09.385 21150.00 IOPS, 82.62 MiB/s [2024-11-18T06:20:30.363Z] [2024-11-18 07:20:30.281650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166fb480 00:35:09.385 [2024-11-18 07:20:30.282822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.385 [2024-11-18 07:20:30.282852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:09.385 [2024-11-18 07:20:30.293578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166ec840 00:35:09.385 [2024-11-18 07:20:30.294810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.385 [2024-11-18 07:20:30.294855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:09.385 [2024-11-18 07:20:30.306136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166ddc00 00:35:09.385 [2024-11-18 07:20:30.307472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.385 [2024-11-18 07:20:30.307523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:09.385 [2024-11-18 07:20:30.317845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.385 [2024-11-18 07:20:30.319006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.385 [2024-11-18 07:20:30.319037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:09.385 [2024-11-18 07:20:30.329815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f7538 00:35:09.385 [2024-11-18 07:20:30.330942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.385 [2024-11-18 07:20:30.330971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:09.385 [2024-11-18 07:20:30.342218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f4298 00:35:09.385 [2024-11-18 07:20:30.343351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.385 [2024-11-18 07:20:30.343386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:09.385 [2024-11-18 07:20:30.353897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e38d0 00:35:09.385 [2024-11-18 07:20:30.354763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.385 [2024-11-18 07:20:30.354792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:09.646 [2024-11-18 07:20:30.366164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166ed0b0 00:35:09.646 [2024-11-18 
07:20:30.366971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.646 [2024-11-18 07:20:30.367000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.646 [2024-11-18 07:20:30.377941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e8088 00:35:09.646 [2024-11-18 07:20:30.378803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.646 [2024-11-18 07:20:30.378831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:09.646 [2024-11-18 07:20:30.389335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f31b8 00:35:09.646 [2024-11-18 07:20:30.390146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.646 [2024-11-18 07:20:30.390175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:09.646 [2024-11-18 07:20:30.401124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e88f8 00:35:09.646 [2024-11-18 07:20:30.401890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.646 [2024-11-18 07:20:30.401918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:09.646 [2024-11-18 07:20:30.413526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e6fa8 00:35:09.646 [2024-11-18 07:20:30.414326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.646 [2024-11-18 07:20:30.414354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:09.646 [2024-11-18 07:20:30.425448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e99d8 00:35:09.646 [2024-11-18 07:20:30.426260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.646 [2024-11-18 07:20:30.426288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:09.646 [2024-11-18 07:20:30.438047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166e6300 00:35:09.646 [2024-11-18 07:20:30.438803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.646 [2024-11-18 07:20:30.438832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:09.646 [2024-11-18 07:20:30.450023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166ed4e8 00:35:09.647 
[2024-11-18 07:20:30.450752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.647 [2024-11-18 07:20:30.450781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:09.647 [2024-11-18 07:20:30.462513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f6890 00:35:09.647 [2024-11-18 07:20:30.463305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.647 [2024-11-18 07:20:30.463333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:09.647 [2024-11-18 07:20:30.476678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f81e0 00:35:09.647 [2024-11-18 07:20:30.477805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.647 [2024-11-18 07:20:30.477833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:09.647 [2024-11-18 07:20:30.487954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166fd208 00:35:09.647 [2024-11-18 07:20:30.488981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.647 [2024-11-18 07:20:30.489009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:09.647 [2024-11-18 07:20:30.499941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f8618 00:35:09.647 [2024-11-18 07:20:30.500726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.647 [2024-11-18 07:20:30.500757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:09.647 [2024-11-18 07:20:30.514802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166fe720 00:35:09.647 [2024-11-18 07:20:30.516513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.647 [2024-11-18 07:20:30.516543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:09.647 [2024-11-18 07:20:30.525779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.647 [2024-11-18 07:20:30.525956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.647 [2024-11-18 07:20:30.525997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.647 [2024-11-18 07:20:30.539144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with 
pdu=0x2000166f0ff8 00:35:09.647 [2024-11-18 07:20:30.539327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.647 [2024-11-18 07:20:30.539367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.647 [2024-11-18 07:20:30.552699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.647 [2024-11-18 07:20:30.552891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.647 [2024-11-18 07:20:30.552931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.647 [2024-11-18 07:20:30.566568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.647 [2024-11-18 07:20:30.566777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.647 [2024-11-18 07:20:30.566802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.647 [2024-11-18 07:20:30.580371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.647 [2024-11-18 07:20:30.580545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.647 [2024-11-18 07:20:30.580585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.647 [2024-11-18 07:20:30.594100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.647 [2024-11-18 07:20:30.594258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.647 [2024-11-18 07:20:30.594283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.647 [2024-11-18 07:20:30.607906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.647 [2024-11-18 07:20:30.608064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.647 [2024-11-18 07:20:30.608103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.647 [2024-11-18 07:20:30.621692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.647 [2024-11-18 07:20:30.621876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.647 [2024-11-18 07:20:30.621916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.909 [2024-11-18 07:20:30.635143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.909 [2024-11-18 07:20:30.635303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.909 [2024-11-18 07:20:30.635328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.909 [2024-11-18 07:20:30.648397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.909 [2024-11-18 07:20:30.648588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.909 [2024-11-18 07:20:30.648616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.909 [2024-11-18 07:20:30.661755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.909 [2024-11-18 07:20:30.661930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.909 [2024-11-18 07:20:30.661970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.909 [2024-11-18 07:20:30.675115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.909 [2024-11-18 07:20:30.675274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.909 [2024-11-18 07:20:30.675307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.909 [2024-11-18 07:20:30.688368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.909 [2024-11-18 07:20:30.688564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.909 [2024-11-18 07:20:30.688592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.909 [2024-11-18 07:20:30.701749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.909 [2024-11-18 07:20:30.701957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.909 [2024-11-18 07:20:30.701981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.909 [2024-11-18 07:20:30.715120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.909 [2024-11-18 07:20:30.715279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.909 [2024-11-18 07:20:30.715304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.909 [2024-11-18 07:20:30.728421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.909 [2024-11-18 07:20:30.728624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.909 [2024-11-18 07:20:30.728652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.909 [2024-11-18 07:20:30.741768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.909 [2024-11-18 07:20:30.741947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.909 [2024-11-18 07:20:30.741988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.909 [2024-11-18 07:20:30.755046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.909 [2024-11-18 07:20:30.755205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.909 [2024-11-18 07:20:30.755231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.909 [2024-11-18 07:20:30.768323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.909 [2024-11-18 07:20:30.768511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.909 [2024-11-18 07:20:30.768556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.910 [2024-11-18 07:20:30.781895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.910 [2024-11-18 07:20:30.782052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.910 [2024-11-18 07:20:30.782079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.910 [2024-11-18 07:20:30.795098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.910 [2024-11-18 07:20:30.795270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.910 [2024-11-18 07:20:30.795302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.910 [2024-11-18 07:20:30.808418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.910 [2024-11-18 07:20:30.808623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.910 [2024-11-18 07:20:30.808662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.910 [2024-11-18 07:20:30.821807] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.910 [2024-11-18 07:20:30.821979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.910 [2024-11-18 07:20:30.822004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.910 [2024-11-18 07:20:30.835247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.910 [2024-11-18 07:20:30.835422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.910 [2024-11-18 07:20:30.835447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.910 [2024-11-18 07:20:30.848453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.910 [2024-11-18 07:20:30.848641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.910 [2024-11-18 07:20:30.848668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.910 [2024-11-18 07:20:30.861726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.910 [2024-11-18 07:20:30.861907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.910 [2024-11-18 07:20:30.861932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.910 [2024-11-18 07:20:30.874693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:09.910 [2024-11-18 07:20:30.874889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.910 [2024-11-18 07:20:30.874915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.171 [2024-11-18 07:20:30.888075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.171 [2024-11-18 07:20:30.888254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.171 [2024-11-18 07:20:30.888294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.171 [2024-11-18 07:20:30.901417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.171 [2024-11-18 07:20:30.901598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.171 [2024-11-18 07:20:30.901622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.171 [2024-11-18 
07:20:30.914582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.171 [2024-11-18 07:20:30.914745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.171 [2024-11-18 07:20:30.914785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.171 [2024-11-18 07:20:30.927766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.171 [2024-11-18 07:20:30.927972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.172 [2024-11-18 07:20:30.927997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.172 [2024-11-18 07:20:30.941059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.172 [2024-11-18 07:20:30.941231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.172 [2024-11-18 07:20:30.941256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.172 [2024-11-18 07:20:30.954221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.172 [2024-11-18 07:20:30.954392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.172 [2024-11-18 07:20:30.954416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.172 [2024-11-18 07:20:30.967520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.172 [2024-11-18 07:20:30.967714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.172 [2024-11-18 07:20:30.967742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.172 [2024-11-18 07:20:30.980714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.172 [2024-11-18 07:20:30.980892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.172 [2024-11-18 07:20:30.980917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.172 [2024-11-18 07:20:30.993878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.172 [2024-11-18 07:20:30.994038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.172 [2024-11-18 07:20:30.994063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
00:35:10.172 [2024-11-18 07:20:31.007107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.172 [2024-11-18 07:20:31.007279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.172 [2024-11-18 07:20:31.007303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.172 [2024-11-18 07:20:31.020419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.172 [2024-11-18 07:20:31.020611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.172 [2024-11-18 07:20:31.020637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.172 [2024-11-18 07:20:31.033636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.172 [2024-11-18 07:20:31.033817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.172 [2024-11-18 07:20:31.033843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.172 [2024-11-18 07:20:31.046832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.172 [2024-11-18 07:20:31.047022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.172 [2024-11-18 07:20:31.047046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.172 [2024-11-18 07:20:31.060290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.172 [2024-11-18 07:20:31.060449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.172 [2024-11-18 07:20:31.060475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.172 [2024-11-18 07:20:31.073788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.172 [2024-11-18 07:20:31.073978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.172 [2024-11-18 07:20:31.074003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.172 [2024-11-18 07:20:31.087037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.172 [2024-11-18 07:20:31.087231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.172 [2024-11-18 07:20:31.087256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 
sqhd:0074 p:0 m:0 dnr:0 00:35:10.172 [2024-11-18 07:20:31.100643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.172 [2024-11-18 07:20:31.100821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.172 [2024-11-18 07:20:31.100861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.172 [2024-11-18 07:20:31.113912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.172 [2024-11-18 07:20:31.114078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.172 [2024-11-18 07:20:31.114102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.172 [2024-11-18 07:20:31.127162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.172 [2024-11-18 07:20:31.127320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.172 [2024-11-18 07:20:31.127344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.172 [2024-11-18 07:20:31.140370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.172 [2024-11-18 07:20:31.140554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.172 [2024-11-18 07:20:31.140586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.431 [2024-11-18 07:20:31.153746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.431 [2024-11-18 07:20:31.153955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.431 [2024-11-18 07:20:31.153980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.431 [2024-11-18 07:20:31.166948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.431 [2024-11-18 07:20:31.167107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.431 [2024-11-18 07:20:31.167146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.432 [2024-11-18 07:20:31.180431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.432 [2024-11-18 07:20:31.180619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.432 [2024-11-18 07:20:31.180659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.432 [2024-11-18 07:20:31.193629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.432 [2024-11-18 07:20:31.193813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.432 [2024-11-18 07:20:31.193837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.432 [2024-11-18 07:20:31.206954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.432 [2024-11-18 07:20:31.207142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.432 [2024-11-18 07:20:31.207166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.432 [2024-11-18 07:20:31.220339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.432 [2024-11-18 07:20:31.220517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.432 [2024-11-18 07:20:31.220545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.432 [2024-11-18 07:20:31.233617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.432 [2024-11-18 07:20:31.233783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.432 [2024-11-18 07:20:31.233825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.432 [2024-11-18 07:20:31.246897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.432 [2024-11-18 07:20:31.247070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.432 [2024-11-18 07:20:31.247096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.432 [2024-11-18 07:20:31.260040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.432 [2024-11-18 07:20:31.260201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.432 [2024-11-18 07:20:31.260241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.432 [2024-11-18 07:20:31.273236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03460) with pdu=0x2000166f0ff8 00:35:10.432 [2024-11-18 07:20:31.274676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.432 [2024-11-18 07:20:31.274705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.432 20389.50 IOPS, 79.65 MiB/s 00:35:10.432 Latency(us) 00:35:10.432 [2024-11-18T06:20:31.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:10.432 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:10.432 nvme0n1 : 2.01 20388.37 79.64 0.00 0.00 6264.50 2645.71 19515.16 00:35:10.432 [2024-11-18T06:20:31.410Z] =================================================================================================================== 00:35:10.432 [2024-11-18T06:20:31.410Z] Total : 20388.37 79.64 0.00 0.00 6264.50 2645.71 19515.16 00:35:10.432 { 00:35:10.432 "results": [ 00:35:10.432 { 00:35:10.432 "job": "nvme0n1", 00:35:10.432 "core_mask": "0x2", 00:35:10.432 "workload": "randwrite", 00:35:10.432 "status": "finished", 00:35:10.432 "queue_depth": 128, 00:35:10.432 "io_size": 4096, 00:35:10.432 "runtime": 2.007909, 00:35:10.432 "iops": 20388.374174327622, 00:35:10.432 "mibps": 79.64208661846727, 00:35:10.432 "io_failed": 0, 00:35:10.432 "io_timeout": 0, 00:35:10.432 "avg_latency_us": 6264.501611434093, 00:35:10.432 "min_latency_us": 2645.7125925925925, 00:35:10.432 "max_latency_us": 19515.164444444443 00:35:10.432 } 00:35:10.432 ], 00:35:10.432 "core_count": 1 00:35:10.432 } 00:35:10.432 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:10.432 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:10.432 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:10.432 | .driver_specific 00:35:10.432 | .nvme_error 00:35:10.432 | .status_code 00:35:10.432 | .command_transient_transport_error' 00:35:10.432 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:10.693 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 160 > 0 )) 00:35:10.693 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 396934 00:35:10.693 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 396934 ']' 00:35:10.693 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 396934 00:35:10.693 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:10.693 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:10.693 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396934 00:35:10.693 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:10.693 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:10.693 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396934' 00:35:10.693 killing process with pid 396934 00:35:10.693 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 396934 00:35:10.693 Received shutdown signal, test time was about 
2.000000 seconds 00:35:10.693 00:35:10.693 Latency(us) 00:35:10.693 [2024-11-18T06:20:31.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:10.693 [2024-11-18T06:20:31.671Z] =================================================================================================================== 00:35:10.693 [2024-11-18T06:20:31.671Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:10.693 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 396934 00:35:10.953 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:10.953 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:10.953 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:10.953 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:10.953 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:10.953 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=397349 00:35:10.953 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:10.953 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 397349 /var/tmp/bperf.sock 00:35:10.953 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 397349 ']' 00:35:10.953 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:10.953 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:10.953 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:10.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:10.953 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:10.953 07:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:10.953 [2024-11-18 07:20:31.873272] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:35:10.953 [2024-11-18 07:20:31.873367] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid397349 ] 00:35:10.953 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:10.953 Zero copy mechanism will not be used. 
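For readers following the trace, the pass/fail gate for the randwrite/qd128 round that finished above is the transient-error count check traced at host/digest.sh@71: get_transient_errcount issues bdev_get_iostat over the bperf RPC socket and pulls the command_transient_transport_error counter out of the driver-specific NVMe error statistics (populated because bdev_nvme_set_options was given --nvme-error-stat). A condensed, non-authoritative sketch assembled from the rpc.py and jq calls shown in the trace — absolute workspace paths shortened to be relative to the SPDK tree, and the comparison value 160 is simply the count reported in this particular run:

    errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
                 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))   # this run recorded 160 transient transport errors, so the digest-error case passes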
00:35:11.212 [2024-11-18 07:20:31.941468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.212 [2024-11-18 07:20:31.991149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:11.212 07:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:11.212 07:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:11.212 07:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:11.212 07:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:11.471 07:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:11.471 07:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.471 07:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:11.471 07:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.471 07:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:11.471 07:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:12.038 nvme0n1 00:35:12.038 07:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:12.038 07:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.038 07:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:12.038 07:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.038 07:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:12.038 07:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:12.300 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:12.300 Zero copy mechanism will not be used. 00:35:12.300 Running I/O for 2 seconds... 
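The 131072-byte, queue-depth-16 round being set up above follows the same shape. The following is a condensed sketch of the sequence already visible in the bperf_rpc/rpc_cmd trace lines, not the literal digest.sh code: absolute workspace paths are shortened, the target address and NQN are copied from the trace, and note that the accel_error_inject_error calls go through rpc_cmd without the bperf socket, i.e. they appear to target the default RPC socket of the main test application rather than bdevperf:

    # second bdevperf instance, private RPC socket, 128 KiB random writes at qd=16
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

    # count NVMe errors per status code and retry failed I/O indefinitely at the bdev layer
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    scripts/rpc.py accel_error_inject_error -o crc32c -t disable        # clear any earlier injection (via rpc_cmd, default socket per the trace)
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32  # corrupt every 32nd crc32c calculation

    # drive I/O; each corrupted data digest surfaces as a COMMAND TRANSIENT TRANSPORT ERROR completion, as in the output below
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests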
00:35:12.300 [2024-11-18 07:20:33.126193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.300 [2024-11-18 07:20:33.126456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.300 [2024-11-18 07:20:33.126507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.300 [2024-11-18 07:20:33.133610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.300 [2024-11-18 07:20:33.133827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.300 [2024-11-18 07:20:33.133861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.300 [2024-11-18 07:20:33.139902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.300 [2024-11-18 07:20:33.140077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.300 [2024-11-18 07:20:33.140107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.300 [2024-11-18 07:20:33.146309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.300 [2024-11-18 07:20:33.146423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.300 [2024-11-18 07:20:33.146452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.300 [2024-11-18 07:20:33.152575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.300 [2024-11-18 07:20:33.152742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.300 [2024-11-18 07:20:33.152771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.300 [2024-11-18 07:20:33.158964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.300 [2024-11-18 07:20:33.159077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.300 [2024-11-18 07:20:33.159105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.300 [2024-11-18 07:20:33.165337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.300 [2024-11-18 07:20:33.165450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.300 [2024-11-18 07:20:33.165478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:35:12.300 [2024-11-18 07:20:33.171351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.300 [2024-11-18 07:20:33.171488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.300 [2024-11-18 07:20:33.171525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.300 [2024-11-18 07:20:33.176574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.300 [2024-11-18 07:20:33.176663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.300 [2024-11-18 07:20:33.176691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.300 [2024-11-18 07:20:33.181994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.300 [2024-11-18 07:20:33.182130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.300 [2024-11-18 07:20:33.182158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.300 [2024-11-18 07:20:33.188480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.300 [2024-11-18 07:20:33.188583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.300 [2024-11-18 07:20:33.188612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.300 [2024-11-18 07:20:33.194996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.300 [2024-11-18 07:20:33.195074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.300 [2024-11-18 07:20:33.195102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.300 [2024-11-18 07:20:33.200865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.300 [2024-11-18 07:20:33.200949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.300 [2024-11-18 07:20:33.200978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.300 [2024-11-18 07:20:33.206628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.300 [2024-11-18 07:20:33.206701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.300 [2024-11-18 07:20:33.206729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.300 [2024-11-18 07:20:33.212376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.300 [2024-11-18 07:20:33.212453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.300 [2024-11-18 07:20:33.212488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.300 [2024-11-18 07:20:33.218148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.300 [2024-11-18 07:20:33.218228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.300 [2024-11-18 07:20:33.218256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.300 [2024-11-18 07:20:33.223849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.300 [2024-11-18 07:20:33.223937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.300 [2024-11-18 07:20:33.223965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.300 [2024-11-18 07:20:33.229465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.300 [2024-11-18 07:20:33.229549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.300 [2024-11-18 07:20:33.229577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.301 [2024-11-18 07:20:33.235326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.301 [2024-11-18 07:20:33.235406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.301 [2024-11-18 07:20:33.235434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.301 [2024-11-18 07:20:33.241054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.301 [2024-11-18 07:20:33.241127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.301 [2024-11-18 07:20:33.241155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.301 [2024-11-18 07:20:33.246528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.301 [2024-11-18 07:20:33.246607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.301 [2024-11-18 07:20:33.246635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.301 [2024-11-18 07:20:33.251884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.301 [2024-11-18 07:20:33.251957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.301 [2024-11-18 07:20:33.251985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.301 [2024-11-18 07:20:33.257217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.301 [2024-11-18 07:20:33.257312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.301 [2024-11-18 07:20:33.257340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.301 [2024-11-18 07:20:33.262794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.301 [2024-11-18 07:20:33.262894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.301 [2024-11-18 07:20:33.262923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.301 [2024-11-18 07:20:33.268375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.301 [2024-11-18 07:20:33.268522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.301 [2024-11-18 07:20:33.268551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.301 [2024-11-18 07:20:33.275448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.301 [2024-11-18 07:20:33.275656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.301 [2024-11-18 07:20:33.275686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.563 [2024-11-18 07:20:33.282028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.563 [2024-11-18 07:20:33.282117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.563 [2024-11-18 07:20:33.282146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.563 [2024-11-18 07:20:33.289273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.563 [2024-11-18 07:20:33.289400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.563 [2024-11-18 07:20:33.289428] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.563 [2024-11-18 07:20:33.295522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.563 [2024-11-18 07:20:33.295605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.563 [2024-11-18 07:20:33.295634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.563 [2024-11-18 07:20:33.300521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.563 [2024-11-18 07:20:33.300606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.563 [2024-11-18 07:20:33.300634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.563 [2024-11-18 07:20:33.305477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.563 [2024-11-18 07:20:33.305566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.563 [2024-11-18 07:20:33.305594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.310431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.310515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.310544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.315533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.315625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.315653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.320555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.320641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.320669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.325516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.325612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.325639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.330512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.330588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.330616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.335779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.335861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.335889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.340831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.340912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.340940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.346272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.346346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.346373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.351480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.351561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.351589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.356484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.356583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.356617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.361365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.361469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 
07:20:33.361506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.366242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.366317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.366346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.371334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.371407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.371435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.376333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.376409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.376438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.381312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.381392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.381420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.386936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.387030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.387058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.392792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.392914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.392941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.398724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.398862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:12.564 [2024-11-18 07:20:33.398889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.405901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.406125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.406153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.412676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.412795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.412823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.419050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.419194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.419222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.425320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.425517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.425545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.431669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.431843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.431871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.438451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.438673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.438703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.445439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.445615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.445643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.451781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.451916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.451943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.564 [2024-11-18 07:20:33.458213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.564 [2024-11-18 07:20:33.458398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.564 [2024-11-18 07:20:33.458426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.565 [2024-11-18 07:20:33.464959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.565 [2024-11-18 07:20:33.465077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.565 [2024-11-18 07:20:33.465105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.565 [2024-11-18 07:20:33.472360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.565 [2024-11-18 07:20:33.472461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.565 [2024-11-18 07:20:33.472496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.565 [2024-11-18 07:20:33.479451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.565 [2024-11-18 07:20:33.479561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.565 [2024-11-18 07:20:33.479590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.565 [2024-11-18 07:20:33.486957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.565 [2024-11-18 07:20:33.487171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.565 [2024-11-18 07:20:33.487199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.565 [2024-11-18 07:20:33.494425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.565 [2024-11-18 07:20:33.494558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.565 [2024-11-18 07:20:33.494586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.565 [2024-11-18 07:20:33.501618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.565 [2024-11-18 07:20:33.501759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.565 [2024-11-18 07:20:33.501787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.565 [2024-11-18 07:20:33.507560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.565 [2024-11-18 07:20:33.507637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.565 [2024-11-18 07:20:33.507666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.565 [2024-11-18 07:20:33.512564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.565 [2024-11-18 07:20:33.512649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.565 [2024-11-18 07:20:33.512677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.565 [2024-11-18 07:20:33.517601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.565 [2024-11-18 07:20:33.517702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.565 [2024-11-18 07:20:33.517736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.565 [2024-11-18 07:20:33.523075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.565 [2024-11-18 07:20:33.523153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.565 [2024-11-18 07:20:33.523181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.565 [2024-11-18 07:20:33.528651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.565 [2024-11-18 07:20:33.528747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.565 [2024-11-18 07:20:33.528775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.565 [2024-11-18 07:20:33.535054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.565 [2024-11-18 07:20:33.535180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.565 [2024-11-18 07:20:33.535208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.825 [2024-11-18 07:20:33.542307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.825 [2024-11-18 07:20:33.542418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.825 [2024-11-18 07:20:33.542446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.825 [2024-11-18 07:20:33.549077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.825 [2024-11-18 07:20:33.549215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.825 [2024-11-18 07:20:33.549243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.555513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.555672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.555700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.562063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.562192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.562220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.568525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.568664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.568692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.574834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.574943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.574972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.581156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.581293] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.581321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.587428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.587579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.587607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.593934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.594059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.594087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.600562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.600683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.600711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.606957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.607084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.607112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.613458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.613618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.613645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.620077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.620269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.620297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.626627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.626796] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.626823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.632359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.632445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.632473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.637741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.637881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.637908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.643081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.643190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.643217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.648038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.648140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.648167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.653040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.653208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.653235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.659525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.659672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.659701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.665745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 
07:20:33.665900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.665928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.672159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.672267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.672295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.678668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.678774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.678808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.684059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.684192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.684223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.689016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.689115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.689143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.694113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.694227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.694254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.699722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.699854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.699884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.704816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 
00:35:12.826 [2024-11-18 07:20:33.704900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.704927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.826 [2024-11-18 07:20:33.709826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.826 [2024-11-18 07:20:33.709954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.826 [2024-11-18 07:20:33.709983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.827 [2024-11-18 07:20:33.716122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.827 [2024-11-18 07:20:33.716285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.827 [2024-11-18 07:20:33.716315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.827 [2024-11-18 07:20:33.722603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.827 [2024-11-18 07:20:33.722811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.827 [2024-11-18 07:20:33.722840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.827 [2024-11-18 07:20:33.728881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.827 [2024-11-18 07:20:33.729054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.827 [2024-11-18 07:20:33.729084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.827 [2024-11-18 07:20:33.735474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.827 [2024-11-18 07:20:33.735677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.827 [2024-11-18 07:20:33.735707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.827 [2024-11-18 07:20:33.742214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.827 [2024-11-18 07:20:33.742315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.827 [2024-11-18 07:20:33.742343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.827 [2024-11-18 07:20:33.749223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with 
pdu=0x2000166ff3c8 00:35:12.827 [2024-11-18 07:20:33.749367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.827 [2024-11-18 07:20:33.749397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.827 [2024-11-18 07:20:33.756443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.827 [2024-11-18 07:20:33.756594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.827 [2024-11-18 07:20:33.756624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.827 [2024-11-18 07:20:33.764168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.827 [2024-11-18 07:20:33.764392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.827 [2024-11-18 07:20:33.764422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.827 [2024-11-18 07:20:33.771607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.827 [2024-11-18 07:20:33.771818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.827 [2024-11-18 07:20:33.771848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.827 [2024-11-18 07:20:33.778853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.827 [2024-11-18 07:20:33.778965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.827 [2024-11-18 07:20:33.778992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.827 [2024-11-18 07:20:33.786343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.827 [2024-11-18 07:20:33.786552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.827 [2024-11-18 07:20:33.786581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.827 [2024-11-18 07:20:33.793598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.827 [2024-11-18 07:20:33.793793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.827 [2024-11-18 07:20:33.793823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.827 [2024-11-18 07:20:33.801093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:12.827 [2024-11-18 07:20:33.801305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.827 [2024-11-18 07:20:33.801335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.089 [2024-11-18 07:20:33.808331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.089 [2024-11-18 07:20:33.808474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.089 [2024-11-18 07:20:33.808514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.089 [2024-11-18 07:20:33.814289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.089 [2024-11-18 07:20:33.814360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.089 [2024-11-18 07:20:33.814388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.089 [2024-11-18 07:20:33.820431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.089 [2024-11-18 07:20:33.820519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.089 [2024-11-18 07:20:33.820547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.089 [2024-11-18 07:20:33.825778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.089 [2024-11-18 07:20:33.825859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.089 [2024-11-18 07:20:33.825887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.089 [2024-11-18 07:20:33.830790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.089 [2024-11-18 07:20:33.830871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.089 [2024-11-18 07:20:33.830898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.089 [2024-11-18 07:20:33.835763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.089 [2024-11-18 07:20:33.835858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.089 [2024-11-18 07:20:33.835885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.089 [2024-11-18 07:20:33.840984] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.089 [2024-11-18 07:20:33.841071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.089 [2024-11-18 07:20:33.841106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.089 [2024-11-18 07:20:33.846113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.089 [2024-11-18 07:20:33.846193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.089 [2024-11-18 07:20:33.846221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.089 [2024-11-18 07:20:33.851994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.089 [2024-11-18 07:20:33.852135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.089 [2024-11-18 07:20:33.852163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.089 [2024-11-18 07:20:33.858476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.089 [2024-11-18 07:20:33.858663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.089 [2024-11-18 07:20:33.858691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.089 [2024-11-18 07:20:33.864898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.089 [2024-11-18 07:20:33.865055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.089 [2024-11-18 07:20:33.865083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.871620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.871749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.871776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.877961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.878091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.878120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.882942] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.883028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.883057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.888161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.888275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.888303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.893545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.893663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.893691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.899938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.900072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.900100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.904598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.904916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.904946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.909434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.909765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.909795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.914520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.914828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.914873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.090 
[2024-11-18 07:20:33.919998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.920345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.920374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.925877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.926177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.926208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.930665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.930948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.930993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.935390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.935686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.935718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.939889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.940110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.940139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.944905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.945175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.945204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.950342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.950593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.950623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:35:13.090 [2024-11-18 07:20:33.955223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.955498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.955528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.960625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.960933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.960977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.964948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.965156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.965183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.969121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.969362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.969392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.973468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.973701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.973731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.977895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.978128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.978162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.982545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.982766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.982795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.987082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.987323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.987353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.991662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.991887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.991916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:33.996239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:33.996450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:33.996477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.090 [2024-11-18 07:20:34.000912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.090 [2024-11-18 07:20:34.001133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.090 [2024-11-18 07:20:34.001162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.091 [2024-11-18 07:20:34.006207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.091 [2024-11-18 07:20:34.006433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.091 [2024-11-18 07:20:34.006462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.091 [2024-11-18 07:20:34.011364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.091 [2024-11-18 07:20:34.011630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.091 [2024-11-18 07:20:34.011659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.091 [2024-11-18 07:20:34.015730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.091 [2024-11-18 07:20:34.015932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.091 [2024-11-18 07:20:34.015959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.091 [2024-11-18 07:20:34.020188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.091 [2024-11-18 07:20:34.020443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.091 [2024-11-18 07:20:34.020478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.091 [2024-11-18 07:20:34.024819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.091 [2024-11-18 07:20:34.025048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.091 [2024-11-18 07:20:34.025077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.091 [2024-11-18 07:20:34.029423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.091 [2024-11-18 07:20:34.029666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.091 [2024-11-18 07:20:34.029696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.091 [2024-11-18 07:20:34.034146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.091 [2024-11-18 07:20:34.034412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.091 [2024-11-18 07:20:34.034456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.091 [2024-11-18 07:20:34.038673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.091 [2024-11-18 07:20:34.038919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.091 [2024-11-18 07:20:34.038948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.091 [2024-11-18 07:20:34.043207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.091 [2024-11-18 07:20:34.043424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.091 [2024-11-18 07:20:34.043463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.091 [2024-11-18 07:20:34.047767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.091 [2024-11-18 07:20:34.047994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.091 [2024-11-18 07:20:34.048023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.091 [2024-11-18 07:20:34.052061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.091 [2024-11-18 07:20:34.052283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.091 [2024-11-18 07:20:34.052311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.091 [2024-11-18 07:20:34.056909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.091 [2024-11-18 07:20:34.057095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.091 [2024-11-18 07:20:34.057122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.091 [2024-11-18 07:20:34.061961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.091 [2024-11-18 07:20:34.062183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.091 [2024-11-18 07:20:34.062222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.354 [2024-11-18 07:20:34.067391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.354 [2024-11-18 07:20:34.067673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.354 [2024-11-18 07:20:34.067704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.354 [2024-11-18 07:20:34.073296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.354 [2024-11-18 07:20:34.073514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.354 [2024-11-18 07:20:34.073542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.354 [2024-11-18 07:20:34.078612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.354 [2024-11-18 07:20:34.078852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.354 [2024-11-18 07:20:34.078882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.354 [2024-11-18 07:20:34.083725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.354 [2024-11-18 07:20:34.083979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.354 [2024-11-18 07:20:34.084010] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.354 [2024-11-18 07:20:34.088917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.354 [2024-11-18 07:20:34.089132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.354 [2024-11-18 07:20:34.089159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.354 [2024-11-18 07:20:34.094080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.354 [2024-11-18 07:20:34.094365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.354 [2024-11-18 07:20:34.094395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.354 [2024-11-18 07:20:34.099182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.354 [2024-11-18 07:20:34.099415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.354 [2024-11-18 07:20:34.099444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.354 [2024-11-18 07:20:34.104532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.354 [2024-11-18 07:20:34.104760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.354 [2024-11-18 07:20:34.104789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.354 [2024-11-18 07:20:34.109685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.354 [2024-11-18 07:20:34.109972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.354 [2024-11-18 07:20:34.110001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.354 [2024-11-18 07:20:34.114957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.354 [2024-11-18 07:20:34.115232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.354 [2024-11-18 07:20:34.115260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.354 [2024-11-18 07:20:34.120179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.354 [2024-11-18 07:20:34.120375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.354 [2024-11-18 
07:20:34.120405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.354 5378.00 IOPS, 672.25 MiB/s [2024-11-18T06:20:34.333Z] [2024-11-18 07:20:34.126797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.127030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.127059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.131436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.131607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.131638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.135659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.135856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.135885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.139976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.140155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.140185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.144229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.144415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.144444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.148929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.149108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.149140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.153950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.154047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.154089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.158994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.159183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.159210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.163154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.163362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.163392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.167445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.167663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.167693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.171654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.171838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.171865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.175973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.176194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.176220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.180140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.180313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.180340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.184453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.184641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.184671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.188716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.188896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.188924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.192941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.193155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.193184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.197147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.197341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.197370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.201336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.201547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.201576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.205457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.205661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.205690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.209602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.209783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.209811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.213698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.213905] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.213934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.217881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.218079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.218108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.222085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.222273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.222301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.226298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.226496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.226523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.230470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.230665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.230694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.234647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.234879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.234909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.238883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.355 [2024-11-18 07:20:34.239110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.355 [2024-11-18 07:20:34.239138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.355 [2024-11-18 07:20:34.243098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.356 [2024-11-18 07:20:34.243312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.356 [2024-11-18 07:20:34.243339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.356 [2024-11-18 07:20:34.247240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.356 [2024-11-18 07:20:34.247414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.356 [2024-11-18 07:20:34.247440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.356 [2024-11-18 07:20:34.251416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.356 [2024-11-18 07:20:34.251613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.356 [2024-11-18 07:20:34.251642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.356 [2024-11-18 07:20:34.255552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.356 [2024-11-18 07:20:34.255756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.356 [2024-11-18 07:20:34.255783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.356 [2024-11-18 07:20:34.259719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.356 [2024-11-18 07:20:34.259947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.356 [2024-11-18 07:20:34.259981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.356 [2024-11-18 07:20:34.263978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.356 [2024-11-18 07:20:34.264180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.356 [2024-11-18 07:20:34.264207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.356 [2024-11-18 07:20:34.268241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.356 [2024-11-18 07:20:34.268421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.356 [2024-11-18 07:20:34.268447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.356 [2024-11-18 07:20:34.272427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.356 [2024-11-18 
07:20:34.272605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.356 [2024-11-18 07:20:34.272632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.356 [2024-11-18 07:20:34.276604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.356 [2024-11-18 07:20:34.276807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.356 [2024-11-18 07:20:34.276835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.356 [2024-11-18 07:20:34.280819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.356 [2024-11-18 07:20:34.281037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.356 [2024-11-18 07:20:34.281064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.356 [2024-11-18 07:20:34.285054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.356 [2024-11-18 07:20:34.285244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.356 [2024-11-18 07:20:34.285271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.356 [2024-11-18 07:20:34.289226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.356 [2024-11-18 07:20:34.289413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.356 [2024-11-18 07:20:34.289440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.356 [2024-11-18 07:20:34.293504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.356 [2024-11-18 07:20:34.293691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.356 [2024-11-18 07:20:34.293718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.356 [2024-11-18 07:20:34.297648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.356 [2024-11-18 07:20:34.297834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.356 [2024-11-18 07:20:34.297861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.356 [2024-11-18 07:20:34.301847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 
00:35:13.356 [2024-11-18 07:20:34.302038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.356 [2024-11-18 07:20:34.302065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.356 [2024-11-18 07:20:34.306049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.356 [2024-11-18 07:20:34.306242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.356 [2024-11-18 07:20:34.306269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.356 [2024-11-18 07:20:34.310289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.356 [2024-11-18 07:20:34.310464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.356 [2024-11-18 07:20:34.310498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.356 [2024-11-18 07:20:34.314657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.356 [2024-11-18 07:20:34.314895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.356 [2024-11-18 07:20:34.314924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.356 [2024-11-18 07:20:34.319291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.356 [2024-11-18 07:20:34.319482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.356 [2024-11-18 07:20:34.319519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.356 [2024-11-18 07:20:34.323778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.356 [2024-11-18 07:20:34.324030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.356 [2024-11-18 07:20:34.324060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.356 [2024-11-18 07:20:34.328308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.356 [2024-11-18 07:20:34.328482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.356 [2024-11-18 07:20:34.328518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.619 [2024-11-18 07:20:34.332514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with 
pdu=0x2000166ff3c8 00:35:13.619 [2024-11-18 07:20:34.332689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.619 [2024-11-18 07:20:34.332716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.619 [2024-11-18 07:20:34.336799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.336976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.337003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.341002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.341183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.341211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.345363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.345563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.345590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.350625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.350806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.350833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.355596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.355779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.355806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.359906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.360109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.360136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.364697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.364898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.364925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.369885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.370159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.370189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.375097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.375326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.375375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.381126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.381343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.381373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.386818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.387087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.387115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.392985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.393315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.393347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.399086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.399286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.399313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.405869] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.406010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.406037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.411623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.411696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.411724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.416033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.416121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.416149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.420256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.420334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.420362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.424654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.424750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.424778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.429036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.429117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.429145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.433368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.433459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.433486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.437789] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.437867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.437894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.442162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.442241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.442269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.446502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.446582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.446609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.450774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.450851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.450879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.455070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.455153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.455181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.459510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.459615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.459643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.620 [2024-11-18 07:20:34.463876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.620 [2024-11-18 07:20:34.463979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.620 [2024-11-18 07:20:34.464006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.621 
[2024-11-18 07:20:34.468223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.468312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.468341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.472591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.472704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.472731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.476866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.476938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.476965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.481200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.481291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.481318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.486541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.486720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.486748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.491617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.491798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.491841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.496687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.496891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.496918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:35:13.621 [2024-11-18 07:20:34.501759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.501954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.501987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.506771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.506912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.506939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.512424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.512599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.512626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.517775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.517908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.517935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.522117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.522212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.522239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.526672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.526756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.526783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.531076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.531211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.531238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.535438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.535517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.535544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.539661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.539784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.539811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.544071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.544147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.544174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.548309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.548454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.548481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.553270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.553431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.553458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.558349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.558524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.558552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.563732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.563838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.563880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.569141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.569266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.569293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.573329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.573420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.573447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.577827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.577932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.577958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.582377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.582498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.582526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.586757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.586863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.586890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.621 [2024-11-18 07:20:34.591138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.621 [2024-11-18 07:20:34.591275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.621 [2024-11-18 07:20:34.591303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.622 [2024-11-18 07:20:34.595342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.622 [2024-11-18 07:20:34.595411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.622 [2024-11-18 07:20:34.595438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.882 [2024-11-18 07:20:34.599543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.599637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.883 [2024-11-18 07:20:34.599665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.604426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.604584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.883 [2024-11-18 07:20:34.604612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.609770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.609905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.883 [2024-11-18 07:20:34.609932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.615659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.615828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.883 [2024-11-18 07:20:34.615854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.620152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.620281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.883 [2024-11-18 07:20:34.620309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.624546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.624670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.883 [2024-11-18 07:20:34.624702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.628963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.629057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.883 [2024-11-18 07:20:34.629085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.633438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.633513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.883 [2024-11-18 07:20:34.633541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.637918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.638032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.883 [2024-11-18 07:20:34.638059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.642247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.642316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.883 [2024-11-18 07:20:34.642342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.646568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.646654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.883 [2024-11-18 07:20:34.646681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.650825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.650955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.883 [2024-11-18 07:20:34.650981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.655269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.655386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.883 [2024-11-18 07:20:34.655413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.659660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.659778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.883 [2024-11-18 
07:20:34.659805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.664135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.664270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.883 [2024-11-18 07:20:34.664297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.668467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.668546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.883 [2024-11-18 07:20:34.668574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.672760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.672878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.883 [2024-11-18 07:20:34.672905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.677584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.677731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.883 [2024-11-18 07:20:34.677759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.682719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.682924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.883 [2024-11-18 07:20:34.682952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.687820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.688018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.883 [2024-11-18 07:20:34.688045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.692901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.693092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:13.883 [2024-11-18 07:20:34.693119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.699026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.699107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.883 [2024-11-18 07:20:34.699134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.703502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.703575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.883 [2024-11-18 07:20:34.703602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.707627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.883 [2024-11-18 07:20:34.707753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.883 [2024-11-18 07:20:34.707780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.883 [2024-11-18 07:20:34.712078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.884 [2024-11-18 07:20:34.712223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.884 [2024-11-18 07:20:34.712251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.884 [2024-11-18 07:20:34.716561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.884 [2024-11-18 07:20:34.716674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.884 [2024-11-18 07:20:34.716700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.884 [2024-11-18 07:20:34.720989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.884 [2024-11-18 07:20:34.721079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.884 [2024-11-18 07:20:34.721105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.884 [2024-11-18 07:20:34.725506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.884 [2024-11-18 07:20:34.725601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:13.884 [2024-11-18 07:20:34.725628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.884 [2024-11-18 07:20:34.729996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.884 [2024-11-18 07:20:34.730104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.884 [2024-11-18 07:20:34.730131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.884 [2024-11-18 07:20:34.734471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.884 [2024-11-18 07:20:34.734609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.884 [2024-11-18 07:20:34.734637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.884 [2024-11-18 07:20:34.738909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.884 [2024-11-18 07:20:34.738989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.884 [2024-11-18 07:20:34.739016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.884 [2024-11-18 07:20:34.743375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.884 [2024-11-18 07:20:34.743488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.884 [2024-11-18 07:20:34.743527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.884 [2024-11-18 07:20:34.747922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.884 [2024-11-18 07:20:34.748045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.884 [2024-11-18 07:20:34.748073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.884 [2024-11-18 07:20:34.752353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.884 [2024-11-18 07:20:34.752465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.884 [2024-11-18 07:20:34.752499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.884 [2024-11-18 07:20:34.756741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.884 [2024-11-18 07:20:34.756856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.884 [2024-11-18 07:20:34.756884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.884 [2024-11-18 07:20:34.761197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.884 [2024-11-18 07:20:34.761305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.884 [2024-11-18 07:20:34.761334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.884 [2024-11-18 07:20:34.765621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.884 [2024-11-18 07:20:34.765761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.884 [2024-11-18 07:20:34.765789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.884 [2024-11-18 07:20:34.770097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.884 [2024-11-18 07:20:34.770183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.884 [2024-11-18 07:20:34.770209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.884 [2024-11-18 07:20:34.774516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.884 [2024-11-18 07:20:34.774660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.884 [2024-11-18 07:20:34.774687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.884 [2024-11-18 07:20:34.778763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.884 [2024-11-18 07:20:34.778899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.884 [2024-11-18 07:20:34.778925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.884 [2024-11-18 07:20:34.783208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.884 [2024-11-18 07:20:34.783317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.884 [2024-11-18 07:20:34.783345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.884 [2024-11-18 07:20:34.787768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.884 [2024-11-18 07:20:34.787877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.884 [2024-11-18 07:20:34.787904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.884 [2024-11-18 07:20:34.792184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.884 [2024-11-18 07:20:34.792315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.884 [2024-11-18 07:20:34.792342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.884 [2024-11-18 07:20:34.796470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.884 [2024-11-18 07:20:34.796551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.884 [2024-11-18 07:20:34.796577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.884 [2024-11-18 07:20:34.800896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.884 [2024-11-18 07:20:34.800987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.884 [2024-11-18 07:20:34.801015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.884 [2024-11-18 07:20:34.805890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.884 [2024-11-18 07:20:34.806069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.884 [2024-11-18 07:20:34.806097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.885 [2024-11-18 07:20:34.810865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.885 [2024-11-18 07:20:34.811023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.885 [2024-11-18 07:20:34.811050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.885 [2024-11-18 07:20:34.816556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.885 [2024-11-18 07:20:34.816750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.885 [2024-11-18 07:20:34.816778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.885 [2024-11-18 07:20:34.821445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.885 [2024-11-18 07:20:34.821523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.885 [2024-11-18 07:20:34.821551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.885 [2024-11-18 07:20:34.825729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.885 [2024-11-18 07:20:34.825860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.885 [2024-11-18 07:20:34.825887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.885 [2024-11-18 07:20:34.830090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.885 [2024-11-18 07:20:34.830195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.885 [2024-11-18 07:20:34.830222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.885 [2024-11-18 07:20:34.834484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.885 [2024-11-18 07:20:34.834632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.885 [2024-11-18 07:20:34.834661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.885 [2024-11-18 07:20:34.838792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.885 [2024-11-18 07:20:34.838910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.885 [2024-11-18 07:20:34.838942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.885 [2024-11-18 07:20:34.843313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.885 [2024-11-18 07:20:34.843425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.885 [2024-11-18 07:20:34.843452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.885 [2024-11-18 07:20:34.847556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.885 [2024-11-18 07:20:34.847691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.885 [2024-11-18 07:20:34.847721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.885 [2024-11-18 07:20:34.851995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.885 [2024-11-18 07:20:34.852158] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.885 [2024-11-18 07:20:34.852187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.885 [2024-11-18 07:20:34.857170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:13.885 [2024-11-18 07:20:34.857354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.885 [2024-11-18 07:20:34.857383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.147 [2024-11-18 07:20:34.862148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.147 [2024-11-18 07:20:34.862317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.147 [2024-11-18 07:20:34.862353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.147 [2024-11-18 07:20:34.868298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.147 [2024-11-18 07:20:34.868418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.147 [2024-11-18 07:20:34.868445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.147 [2024-11-18 07:20:34.873047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.147 [2024-11-18 07:20:34.873118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.147 [2024-11-18 07:20:34.873145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.147 [2024-11-18 07:20:34.877516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.147 [2024-11-18 07:20:34.877640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.147 [2024-11-18 07:20:34.877669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.147 [2024-11-18 07:20:34.881994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.147 [2024-11-18 07:20:34.882082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.147 [2024-11-18 07:20:34.882108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.147 [2024-11-18 07:20:34.886449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.147 [2024-11-18 
07:20:34.886581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.147 [2024-11-18 07:20:34.886608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.147 [2024-11-18 07:20:34.890849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.147 [2024-11-18 07:20:34.890982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.147 [2024-11-18 07:20:34.891009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.147 [2024-11-18 07:20:34.895361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.147 [2024-11-18 07:20:34.895511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.147 [2024-11-18 07:20:34.895541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.147 [2024-11-18 07:20:34.899678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.147 [2024-11-18 07:20:34.899771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.147 [2024-11-18 07:20:34.899798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.147 [2024-11-18 07:20:34.904194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.147 [2024-11-18 07:20:34.904302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.147 [2024-11-18 07:20:34.904339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.147 [2024-11-18 07:20:34.908593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.147 [2024-11-18 07:20:34.908757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.147 [2024-11-18 07:20:34.908786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.147 [2024-11-18 07:20:34.912788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.147 [2024-11-18 07:20:34.912858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.147 [2024-11-18 07:20:34.912884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.147 [2024-11-18 07:20:34.917250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 
00:35:14.147 [2024-11-18 07:20:34.917390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:34.917419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:34.922389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:34.922586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:34.922616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:34.927519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:34.927680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:34.927710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:34.933752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:34.933824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:34.933851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:34.938283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:34.938355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:34.938382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:34.942651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:34.942751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:34.942789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:34.947053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:34.947168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:34.947194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:34.951264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with 
pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:34.951341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:34.951367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:34.955911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:34.956074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:34.956103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:34.960978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:34.961144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:34.961173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:34.966138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:34.966252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:34.966278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:34.971879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:34.972020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:34.972050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:34.976161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:34.976233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:34.976261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:34.980471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:34.980599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:34.980626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:34.984901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:34.985048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:34.985077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:34.989430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:34.989548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:34.989575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:34.993776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:34.993861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:34.993888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:34.998509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:34.998694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:34.998723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:35.003621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:35.003791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:35.003821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:35.009439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:35.009617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:35.009646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:35.014643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:35.014765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:35.014791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:35.018849] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:35.018963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:35.018991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:35.023211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:35.023344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:35.023370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:35.028374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:35.028501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:35.028542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:35.033499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:35.033644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.148 [2024-11-18 07:20:35.033671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.148 [2024-11-18 07:20:35.038012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.148 [2024-11-18 07:20:35.038123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.149 [2024-11-18 07:20:35.038150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.149 [2024-11-18 07:20:35.042270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.149 [2024-11-18 07:20:35.042408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.149 [2024-11-18 07:20:35.042435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.149 [2024-11-18 07:20:35.046559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.149 [2024-11-18 07:20:35.046649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.149 [2024-11-18 07:20:35.046678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.149 [2024-11-18 07:20:35.050851] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.149 [2024-11-18 07:20:35.050975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.149 [2024-11-18 07:20:35.051002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.149 [2024-11-18 07:20:35.055368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.149 [2024-11-18 07:20:35.055477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.149 [2024-11-18 07:20:35.055511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.149 [2024-11-18 07:20:35.060786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.149 [2024-11-18 07:20:35.060972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.149 [2024-11-18 07:20:35.060999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.149 [2024-11-18 07:20:35.066552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.149 [2024-11-18 07:20:35.066720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.149 [2024-11-18 07:20:35.066748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.149 [2024-11-18 07:20:35.072167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.149 [2024-11-18 07:20:35.072325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.149 [2024-11-18 07:20:35.072353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.149 [2024-11-18 07:20:35.076882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.149 [2024-11-18 07:20:35.077021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.149 [2024-11-18 07:20:35.077049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.149 [2024-11-18 07:20:35.081119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.149 [2024-11-18 07:20:35.081198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.149 [2024-11-18 07:20:35.081226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.149 
[2024-11-18 07:20:35.085363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.149 [2024-11-18 07:20:35.085479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.149 [2024-11-18 07:20:35.085515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.149 [2024-11-18 07:20:35.089690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.149 [2024-11-18 07:20:35.089841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.149 [2024-11-18 07:20:35.089868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.149 [2024-11-18 07:20:35.094855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.149 [2024-11-18 07:20:35.094962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.149 [2024-11-18 07:20:35.094989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.149 [2024-11-18 07:20:35.099476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.149 [2024-11-18 07:20:35.099571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.149 [2024-11-18 07:20:35.099598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.149 [2024-11-18 07:20:35.104131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.149 [2024-11-18 07:20:35.104209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.149 [2024-11-18 07:20:35.104237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.149 [2024-11-18 07:20:35.109199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.149 [2024-11-18 07:20:35.109277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.149 [2024-11-18 07:20:35.109305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.149 [2024-11-18 07:20:35.114402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.149 [2024-11-18 07:20:35.114476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.149 [2024-11-18 07:20:35.114512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:35:14.149 [2024-11-18 07:20:35.119653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.149 [2024-11-18 07:20:35.119728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.149 [2024-11-18 07:20:35.119756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.409 [2024-11-18 07:20:35.124204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd037a0) with pdu=0x2000166ff3c8 00:35:14.409 [2024-11-18 07:20:35.124275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.409 [2024-11-18 07:20:35.124302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.409 6048.00 IOPS, 756.00 MiB/s 00:35:14.409 Latency(us) 00:35:14.409 [2024-11-18T06:20:35.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.409 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:14.409 nvme0n1 : 2.00 6045.97 755.75 0.00 0.00 2639.66 1953.94 10097.40 00:35:14.409 [2024-11-18T06:20:35.387Z] =================================================================================================================== 00:35:14.409 [2024-11-18T06:20:35.387Z] Total : 6045.97 755.75 0.00 0.00 2639.66 1953.94 10097.40 00:35:14.409 { 00:35:14.409 "results": [ 00:35:14.409 { 00:35:14.409 "job": "nvme0n1", 00:35:14.409 "core_mask": "0x2", 00:35:14.409 "workload": "randwrite", 00:35:14.409 "status": "finished", 00:35:14.409 "queue_depth": 16, 00:35:14.409 "io_size": 131072, 00:35:14.409 "runtime": 2.003154, 00:35:14.409 "iops": 6045.965512386966, 00:35:14.409 "mibps": 755.7456890483708, 00:35:14.409 "io_failed": 0, 00:35:14.409 "io_timeout": 0, 00:35:14.409 "avg_latency_us": 2639.6589747306552, 00:35:14.409 "min_latency_us": 1953.9437037037037, 00:35:14.409 "max_latency_us": 10097.39851851852 00:35:14.409 } 00:35:14.409 ], 00:35:14.409 "core_count": 1 00:35:14.409 } 00:35:14.409 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:14.409 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:14.410 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:14.410 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:14.410 | .driver_specific 00:35:14.410 | .nvme_error 00:35:14.410 | .status_code 00:35:14.410 | .command_transient_transport_error' 00:35:14.670 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 391 > 0 )) 00:35:14.670 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 397349 00:35:14.670 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 397349 ']' 00:35:14.670 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 397349 00:35:14.670 07:20:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:14.670 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:14.670 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 397349 00:35:14.670 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:14.670 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:14.670 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 397349' 00:35:14.670 killing process with pid 397349 00:35:14.670 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 397349 00:35:14.670 Received shutdown signal, test time was about 2.000000 seconds 00:35:14.670 00:35:14.670 Latency(us) 00:35:14.670 [2024-11-18T06:20:35.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.670 [2024-11-18T06:20:35.648Z] =================================================================================================================== 00:35:14.670 [2024-11-18T06:20:35.649Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:14.671 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 397349 00:35:14.934 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 395985 00:35:14.934 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 395985 ']' 00:35:14.934 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 395985 00:35:14.934 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:14.934 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:14.934 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 395985 00:35:14.934 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:14.934 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:14.934 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 395985' 00:35:14.934 killing process with pid 395985 00:35:14.934 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 395985 00:35:14.934 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 395985 00:35:14.934 00:35:14.934 real 0m15.690s 00:35:14.934 user 0m31.647s 00:35:14.934 sys 0m4.302s 00:35:14.934 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:14.934 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:14.934 ************************************ 00:35:14.934 END TEST nvmf_digest_error 00:35:14.934 ************************************ 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT 
SIGTERM EXIT 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:15.195 rmmod nvme_tcp 00:35:15.195 rmmod nvme_fabrics 00:35:15.195 rmmod nvme_keyring 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 395985 ']' 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 395985 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 395985 ']' 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 395985 00:35:15.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (395985) - No such process 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 395985 is not found' 00:35:15.195 Process with pid 395985 is not found 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:15.195 07:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.109 07:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:17.109 00:35:17.109 real 0m36.002s 00:35:17.109 user 1m4.216s 00:35:17.109 sys 0m10.249s 00:35:17.109 07:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:17.109 07:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:17.109 ************************************ 00:35:17.109 END TEST nvmf_digest 00:35:17.109 
************************************ 00:35:17.109 07:20:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:17.109 07:20:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:17.109 07:20:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:17.109 07:20:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:17.109 07:20:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:17.109 07:20:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:17.109 07:20:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.109 ************************************ 00:35:17.109 START TEST nvmf_bdevperf 00:35:17.109 ************************************ 00:35:17.109 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:17.369 * Looking for test storage... 00:35:17.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:17.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.369 --rc genhtml_branch_coverage=1 00:35:17.369 --rc genhtml_function_coverage=1 00:35:17.369 --rc genhtml_legend=1 00:35:17.369 --rc geninfo_all_blocks=1 00:35:17.369 --rc geninfo_unexecuted_blocks=1 00:35:17.369 00:35:17.369 ' 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:17.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.369 --rc genhtml_branch_coverage=1 00:35:17.369 --rc genhtml_function_coverage=1 00:35:17.369 --rc genhtml_legend=1 00:35:17.369 --rc geninfo_all_blocks=1 00:35:17.369 --rc geninfo_unexecuted_blocks=1 00:35:17.369 00:35:17.369 ' 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:17.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.369 --rc genhtml_branch_coverage=1 00:35:17.369 --rc genhtml_function_coverage=1 00:35:17.369 --rc genhtml_legend=1 00:35:17.369 --rc geninfo_all_blocks=1 00:35:17.369 --rc geninfo_unexecuted_blocks=1 00:35:17.369 00:35:17.369 ' 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:17.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.369 --rc genhtml_branch_coverage=1 00:35:17.369 --rc genhtml_function_coverage=1 00:35:17.369 --rc genhtml_legend=1 00:35:17.369 --rc geninfo_all_blocks=1 00:35:17.369 --rc geninfo_unexecuted_blocks=1 00:35:17.369 00:35:17.369 ' 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:17.369 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:17.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:17.370 07:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:19.279 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:19.279 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
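The discovery loop above resolves each Intel E810 function (vendor:device 0x8086:0x159b) to the kernel net devices sitting under it in sysfs and keeps only the ones whose link is up. A stand-alone sketch of that sysfs walk, with the PCI address taken from this run's 'Found 0000:0a:00.0' line (illustration only, not the common.sh helper itself):

  pci=0000:0a:00.0                      # first port reported in this log
  for dev in /sys/bus/pci/devices/$pci/net/*; do
      [ -e "$dev" ] || continue         # function has no bound net device
      name=${dev##*/}
      state=$(cat "$dev/operstate")     # the script only keeps devices that are 'up'
      echo "PCI $pci -> net device $name ($state)"
  done

In this run both ports resolve to cvl_0_0 and cvl_0_1, which is what the 'Found net devices under 0000:0a:00.x' lines report next.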
00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:19.279 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:19.279 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:19.279 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:19.538 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:19.538 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:19.538 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:19.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:19.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:35:19.539 00:35:19.539 --- 10.0.0.2 ping statistics --- 00:35:19.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:19.539 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:19.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:19.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:35:19.539 00:35:19.539 --- 10.0.0.1 ping statistics --- 00:35:19.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:19.539 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=399714 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 399714 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 399714 ']' 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:19.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:19.539 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:19.539 [2024-11-18 07:20:40.442668] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
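nvmfappstart above launches nvmf_tgt inside the target namespace with core mask 0xE (cores 1-3, matching the three reactor notices that follow) and then sits in waitforlisten until the app answers on /var/tmp/spdk.sock. A rough stand-in for that launch-and-wait step, with the namespace, flags and socket path taken from this log; the real waitforlisten in autotest_common.sh is more careful about timeouts, and the rpc_get_methods probe below is only an illustrative liveness check, not what the helper actually calls:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Poll the RPC socket until the target is ready to serve configuration RPCs.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited during startup' >&2; break; }
      sleep 0.5
  done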
00:35:19.539 [2024-11-18 07:20:40.442747] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:19.802 [2024-11-18 07:20:40.518375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:19.802 [2024-11-18 07:20:40.568230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:19.802 [2024-11-18 07:20:40.568304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:19.802 [2024-11-18 07:20:40.568317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:19.802 [2024-11-18 07:20:40.568343] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:19.802 [2024-11-18 07:20:40.568352] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:19.802 [2024-11-18 07:20:40.569955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:19.802 [2024-11-18 07:20:40.570023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:19.802 [2024-11-18 07:20:40.570026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:19.802 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:19.802 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:19.802 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:19.802 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:19.802 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:19.802 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:19.802 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:19.802 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.802 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:19.802 [2024-11-18 07:20:40.717076] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:19.802 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.802 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:19.802 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.802 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:19.802 Malloc0 00:35:19.802 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.802 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:19.802 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.802 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:19.802 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
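rpc_cmd in the trace above is autotest's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the bring-up completed so far can be replayed directly as below; the namespace-attach and listener RPCs that finish the subsystem follow a few lines further down in the log. The $rpc alias is purely illustrative:

  rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc nvmf_create_transport -t tcp -o -u 8192                    # same flags the rpc_cmd wrapper passed above
  $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host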
00:35:19.802 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:19.802 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.802 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:20.089 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.089 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:20.089 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.089 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:20.089 [2024-11-18 07:20:40.784836] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:20.089 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.089 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:20.089 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:20.089 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:20.089 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:20.089 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:20.089 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:20.089 { 00:35:20.089 "params": { 00:35:20.089 "name": "Nvme$subsystem", 00:35:20.089 "trtype": "$TEST_TRANSPORT", 00:35:20.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:20.089 "adrfam": "ipv4", 00:35:20.089 "trsvcid": "$NVMF_PORT", 00:35:20.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:20.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:20.090 "hdgst": ${hdgst:-false}, 00:35:20.090 "ddgst": ${ddgst:-false} 00:35:20.090 }, 00:35:20.090 "method": "bdev_nvme_attach_controller" 00:35:20.090 } 00:35:20.090 EOF 00:35:20.090 )") 00:35:20.090 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:20.090 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:20.090 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:20.090 07:20:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:20.090 "params": { 00:35:20.090 "name": "Nvme1", 00:35:20.090 "trtype": "tcp", 00:35:20.090 "traddr": "10.0.0.2", 00:35:20.090 "adrfam": "ipv4", 00:35:20.090 "trsvcid": "4420", 00:35:20.090 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:20.090 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:20.090 "hdgst": false, 00:35:20.090 "ddgst": false 00:35:20.090 }, 00:35:20.090 "method": "bdev_nvme_attach_controller" 00:35:20.090 }' 00:35:20.090 [2024-11-18 07:20:40.840133] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
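gen_nvmf_target_json assembles the bdevperf configuration on the fly, and the printf output above is the single controller entry it produced for this run; bdevperf reads it through --json /dev/fd/62 rather than from a file. Reassembled here without the interleaved log timestamps:

  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }

With this entry bdevperf attaches controller Nvme1 over NVMe/TCP to the 10.0.0.2:4420 listener created above and exposes its namespace as bdev Nvme1n1, the job name in the results table that follows.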
00:35:20.090 [2024-11-18 07:20:40.840203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid399852 ] 00:35:20.090 [2024-11-18 07:20:40.907850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:20.090 [2024-11-18 07:20:40.956646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:20.362 Running I/O for 1 seconds... 00:35:21.337 8686.00 IOPS, 33.93 MiB/s 00:35:21.337 Latency(us) 00:35:21.337 [2024-11-18T06:20:42.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:21.337 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:21.337 Verification LBA range: start 0x0 length 0x4000 00:35:21.337 Nvme1n1 : 1.01 8740.86 34.14 0.00 0.00 14581.58 1189.36 14175.19 00:35:21.337 [2024-11-18T06:20:42.315Z] =================================================================================================================== 00:35:21.337 [2024-11-18T06:20:42.315Z] Total : 8740.86 34.14 0.00 0.00 14581.58 1189.36 14175.19 00:35:21.595 07:20:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=400010 00:35:21.595 07:20:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:21.595 07:20:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:21.595 07:20:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:21.595 07:20:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:21.595 07:20:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:21.595 07:20:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:21.595 07:20:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:21.595 { 00:35:21.595 "params": { 00:35:21.595 "name": "Nvme$subsystem", 00:35:21.595 "trtype": "$TEST_TRANSPORT", 00:35:21.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:21.595 "adrfam": "ipv4", 00:35:21.595 "trsvcid": "$NVMF_PORT", 00:35:21.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:21.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:21.595 "hdgst": ${hdgst:-false}, 00:35:21.595 "ddgst": ${ddgst:-false} 00:35:21.595 }, 00:35:21.595 "method": "bdev_nvme_attach_controller" 00:35:21.595 } 00:35:21.595 EOF 00:35:21.595 )") 00:35:21.595 07:20:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:21.596 07:20:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
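The second bdevperf above runs the same verify workload for 15 seconds (-q 128 queue depth, -o 4096-byte I/Os), and host/bdevperf.sh then pulls the target out from under it: the kill -9 of pid 399714 just below is the nvmf_tgt started earlier. The shape of that disruption, sketched with the pids from this run (gen_nvmf_target_json is the nvmf/common.sh helper seen above, supplied here via process substitution just as the script supplies /dev/fd/63):

  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
  bdevperfpid=$!        # 400010 in this log
  sleep 3               # let I/O settle (~8.7k IOPS at 4 KiB here)
  kill -9 399714        # SIGKILL the target while I/O is in flight
  sleep 3

The flood of 'ABORTED - SQ DELETION' completions that follows is the expected fallout: the in-flight READ/WRITE commands on qid:1 are failed back as the TCP qpair is torn down after the target disappears.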
00:35:21.596 07:20:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:21.596 07:20:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:21.596 "params": { 00:35:21.596 "name": "Nvme1", 00:35:21.596 "trtype": "tcp", 00:35:21.596 "traddr": "10.0.0.2", 00:35:21.596 "adrfam": "ipv4", 00:35:21.596 "trsvcid": "4420", 00:35:21.596 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:21.596 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:21.596 "hdgst": false, 00:35:21.596 "ddgst": false 00:35:21.596 }, 00:35:21.596 "method": "bdev_nvme_attach_controller" 00:35:21.596 }' 00:35:21.596 [2024-11-18 07:20:42.413000] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:35:21.596 [2024-11-18 07:20:42.413078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid400010 ] 00:35:21.596 [2024-11-18 07:20:42.480811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:21.596 [2024-11-18 07:20:42.525127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:22.162 Running I/O for 15 seconds... 00:35:24.037 8735.00 IOPS, 34.12 MiB/s [2024-11-18T06:20:45.584Z] 8763.00 IOPS, 34.23 MiB/s [2024-11-18T06:20:45.584Z] 07:20:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 399714 00:35:24.606 07:20:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:24.606 [2024-11-18 07:20:45.384825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.606 [2024-11-18 07:20:45.384891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.606 [2024-11-18 07:20:45.384934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.606 [2024-11-18 07:20:45.384952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.606 [2024-11-18 07:20:45.384971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.606 [2024-11-18 07:20:45.384986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.606 [2024-11-18 07:20:45.385003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.606 [2024-11-18 07:20:45.385018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.606 [2024-11-18 07:20:45.385049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.606 [2024-11-18 07:20:45.385064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.606 [2024-11-18 07:20:45.385082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.606 [2024-11-18 
07:20:45.385097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.606 [2024-11-18 07:20:45.385112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.606 [2024-11-18 07:20:45.385126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.606 [2024-11-18 07:20:45.385142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.606 [2024-11-18 07:20:45.385158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.606 [2024-11-18 07:20:45.385174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.606 [2024-11-18 07:20:45.385189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.606 [2024-11-18 07:20:45.385207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.606 [2024-11-18 07:20:45.385236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.606 [2024-11-18 07:20:45.385252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.606 [2024-11-18 07:20:45.385274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.606 [2024-11-18 07:20:45.385289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.606 [2024-11-18 07:20:45.385301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.606 [2024-11-18 07:20:45.385316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.385330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.385345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.385358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.385374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.385387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.385401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.385413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.385428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.607 [2024-11-18 07:20:45.385441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.385458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.607 [2024-11-18 07:20:45.385496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.385520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.385534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.385552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.385569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.385586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.385602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.385620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.385638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.385656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.385670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.385685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.385703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.385719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.385734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.385762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.385790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.385807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:44856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.385821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.385850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.385863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.385876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.385888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.385902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.385914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.385928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:44888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.385940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.385954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.385966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.385980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.385992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.386006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.386018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.386032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:44920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.386044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.386058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.386070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.386095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.386109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.386122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.386135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.386148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.386161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.386175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.386188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.386201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.386228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.386243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.386254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.386267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.386279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.386293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.386305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.386319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.386331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.386344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.386357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 
[2024-11-18 07:20:45.386370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:45016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.386382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.386395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.386408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.386421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:45032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.386437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.386451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.386463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.386500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:45048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.386518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.607 [2024-11-18 07:20:45.386533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:45056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.607 [2024-11-18 07:20:45.386547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.386562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.608 [2024-11-18 07:20:45.386576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.386591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.608 [2024-11-18 07:20:45.386605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.386620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:45080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.608 [2024-11-18 07:20:45.386633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.386648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:45088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.608 [2024-11-18 07:20:45.386662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.386677] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.608 [2024-11-18 07:20:45.386691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.386705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.608 [2024-11-18 07:20:45.386719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.386734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.608 [2024-11-18 07:20:45.386748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.386763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:45120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.608 [2024-11-18 07:20:45.386777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.386807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:45128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.608 [2024-11-18 07:20:45.386820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.386838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.608 [2024-11-18 07:20:45.386866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.386881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.608 [2024-11-18 07:20:45.386893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.386906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:45152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.608 [2024-11-18 07:20:45.386918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.386931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:45160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.608 [2024-11-18 07:20:45.386943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.386957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:45168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.608 [2024-11-18 07:20:45.386969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.386982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.608 [2024-11-18 07:20:45.386995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.608 [2024-11-18 07:20:45.387019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.608 [2024-11-18 07:20:45.387045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.608 [2024-11-18 07:20:45.387070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.608 [2024-11-18 07:20:45.387095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.608 [2024-11-18 07:20:45.387120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.608 [2024-11-18 07:20:45.387145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:45232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.608 [2024-11-18 07:20:45.387173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.608 [2024-11-18 07:20:45.387198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.608 [2024-11-18 07:20:45.387223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:81 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.608 [2024-11-18 07:20:45.387249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.608 [2024-11-18 07:20:45.387274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.608 [2024-11-18 07:20:45.387299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.608 [2024-11-18 07:20:45.387324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.608 [2024-11-18 07:20:45.387350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.608 [2024-11-18 07:20:45.387382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.608 [2024-11-18 07:20:45.387407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.608 [2024-11-18 07:20:45.387432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.608 [2024-11-18 07:20:45.387457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.608 [2024-11-18 07:20:45.387509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45408 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:24.608 [2024-11-18 07:20:45.387544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.608 [2024-11-18 07:20:45.387574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.608 [2024-11-18 07:20:45.387602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.608 [2024-11-18 07:20:45.387631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.608 [2024-11-18 07:20:45.387659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.608 [2024-11-18 07:20:45.387674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.387687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.387703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.387716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.387731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.387745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.387759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.387774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.387806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.387819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.387833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 
07:20:45.387846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.387880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.387892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.387905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.387917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.387933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.387945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.387959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.387971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.387983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.387995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.388020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.388045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.388070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.388095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.388120] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.388145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.388169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.388193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.388218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.388246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.388272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.388302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.388327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.388353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.388378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.388403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.388429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.388454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.609 [2024-11-18 07:20:45.388503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:45240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.609 [2024-11-18 07:20:45.388534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.609 [2024-11-18 07:20:45.388563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.609 [2024-11-18 07:20:45.388592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.609 [2024-11-18 07:20:45.388624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.609 [2024-11-18 07:20:45.388653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.609 [2024-11-18 07:20:45.388681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.388695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d8f30 is same with the state(6) to be set 00:35:24.609 [2024-11-18 07:20:45.388711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:24.609 [2024-11-18 07:20:45.388722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:24.609 [2024-11-18 07:20:45.388734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45288 len:8 PRP1 0x0 PRP2 0x0 00:35:24.609 [2024-11-18 07:20:45.388751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.609 [2024-11-18 07:20:45.392133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.609 [2024-11-18 07:20:45.392211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.609 [2024-11-18 07:20:45.392819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.610 [2024-11-18 07:20:45.392862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.610 [2024-11-18 07:20:45.392878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.610 [2024-11-18 07:20:45.393094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.610 [2024-11-18 07:20:45.393304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.610 [2024-11-18 07:20:45.393323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.610 [2024-11-18 07:20:45.393337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.610 [2024-11-18 07:20:45.393350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.610 [2024-11-18 07:20:45.405562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.610 [2024-11-18 07:20:45.405901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.610 [2024-11-18 07:20:45.405929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.610 [2024-11-18 07:20:45.405945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.610 [2024-11-18 07:20:45.406167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.610 [2024-11-18 07:20:45.406377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.610 [2024-11-18 07:20:45.406395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.610 [2024-11-18 07:20:45.406407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:35:24.610 [2024-11-18 07:20:45.406423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.610 [2024-11-18 07:20:45.418590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.610 [2024-11-18 07:20:45.419017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.610 [2024-11-18 07:20:45.419044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.610 [2024-11-18 07:20:45.419060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.610 [2024-11-18 07:20:45.419297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.610 [2024-11-18 07:20:45.419530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.610 [2024-11-18 07:20:45.419550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.610 [2024-11-18 07:20:45.419562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.610 [2024-11-18 07:20:45.419574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.610 [2024-11-18 07:20:45.431696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.610 [2024-11-18 07:20:45.432075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.610 [2024-11-18 07:20:45.432117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.610 [2024-11-18 07:20:45.432133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.610 [2024-11-18 07:20:45.432354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.610 [2024-11-18 07:20:45.432590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.610 [2024-11-18 07:20:45.432609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.610 [2024-11-18 07:20:45.432622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.610 [2024-11-18 07:20:45.432633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.610 [2024-11-18 07:20:45.444723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.610 [2024-11-18 07:20:45.445239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.610 [2024-11-18 07:20:45.445281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.610 [2024-11-18 07:20:45.445298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.610 [2024-11-18 07:20:45.445567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.610 [2024-11-18 07:20:45.445766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.610 [2024-11-18 07:20:45.445784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.610 [2024-11-18 07:20:45.445797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.610 [2024-11-18 07:20:45.445823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.610 [2024-11-18 07:20:45.457844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.610 [2024-11-18 07:20:45.458344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.610 [2024-11-18 07:20:45.458386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.610 [2024-11-18 07:20:45.458403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.610 [2024-11-18 07:20:45.458687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.610 [2024-11-18 07:20:45.458903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.610 [2024-11-18 07:20:45.458922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.610 [2024-11-18 07:20:45.458934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.610 [2024-11-18 07:20:45.458944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.610 [2024-11-18 07:20:45.470920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.610 [2024-11-18 07:20:45.471426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.610 [2024-11-18 07:20:45.471468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.610 [2024-11-18 07:20:45.471485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.610 [2024-11-18 07:20:45.471724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.610 [2024-11-18 07:20:45.471955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.610 [2024-11-18 07:20:45.471973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.610 [2024-11-18 07:20:45.471985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.610 [2024-11-18 07:20:45.471996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.610 [2024-11-18 07:20:45.484003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.610 [2024-11-18 07:20:45.484372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.610 [2024-11-18 07:20:45.484416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.610 [2024-11-18 07:20:45.484431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.610 [2024-11-18 07:20:45.484695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.610 [2024-11-18 07:20:45.484923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.610 [2024-11-18 07:20:45.484942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.610 [2024-11-18 07:20:45.484954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.610 [2024-11-18 07:20:45.484965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.610 [2024-11-18 07:20:45.497003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.610 [2024-11-18 07:20:45.497366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.610 [2024-11-18 07:20:45.497407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.610 [2024-11-18 07:20:45.497427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.610 [2024-11-18 07:20:45.497705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.610 [2024-11-18 07:20:45.497934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.610 [2024-11-18 07:20:45.497953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.610 [2024-11-18 07:20:45.497965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.610 [2024-11-18 07:20:45.497976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.610 [2024-11-18 07:20:45.510049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.610 [2024-11-18 07:20:45.510416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.610 [2024-11-18 07:20:45.510443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.610 [2024-11-18 07:20:45.510459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.610 [2024-11-18 07:20:45.510709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.610 [2024-11-18 07:20:45.510943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.610 [2024-11-18 07:20:45.510962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.610 [2024-11-18 07:20:45.510973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.610 [2024-11-18 07:20:45.510984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.610 [2024-11-18 07:20:45.523169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.611 [2024-11-18 07:20:45.523593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.611 [2024-11-18 07:20:45.523620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.611 [2024-11-18 07:20:45.523636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.611 [2024-11-18 07:20:45.523861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.611 [2024-11-18 07:20:45.524085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.611 [2024-11-18 07:20:45.524103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.611 [2024-11-18 07:20:45.524115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.611 [2024-11-18 07:20:45.524126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.611 [2024-11-18 07:20:45.536319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.611 [2024-11-18 07:20:45.536753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.611 [2024-11-18 07:20:45.536780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.611 [2024-11-18 07:20:45.536810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.611 [2024-11-18 07:20:45.537032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.611 [2024-11-18 07:20:45.537241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.611 [2024-11-18 07:20:45.537264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.611 [2024-11-18 07:20:45.537276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.611 [2024-11-18 07:20:45.537287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.611 [2024-11-18 07:20:45.549303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.611 [2024-11-18 07:20:45.549639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.611 [2024-11-18 07:20:45.549665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.611 [2024-11-18 07:20:45.549681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.611 [2024-11-18 07:20:45.549902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.611 [2024-11-18 07:20:45.550109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.611 [2024-11-18 07:20:45.550128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.611 [2024-11-18 07:20:45.550140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.611 [2024-11-18 07:20:45.550151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.611 [2024-11-18 07:20:45.562517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.611 [2024-11-18 07:20:45.562911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.611 [2024-11-18 07:20:45.562938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.611 [2024-11-18 07:20:45.562953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.611 [2024-11-18 07:20:45.563174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.611 [2024-11-18 07:20:45.563398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.611 [2024-11-18 07:20:45.563416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.611 [2024-11-18 07:20:45.563428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.611 [2024-11-18 07:20:45.563440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.611 [2024-11-18 07:20:45.575828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.611 [2024-11-18 07:20:45.576208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.611 [2024-11-18 07:20:45.576234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.611 [2024-11-18 07:20:45.576249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.611 [2024-11-18 07:20:45.576463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.611 [2024-11-18 07:20:45.576702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.611 [2024-11-18 07:20:45.576721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.611 [2024-11-18 07:20:45.576733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.611 [2024-11-18 07:20:45.576749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.870 [2024-11-18 07:20:45.588930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.870 [2024-11-18 07:20:45.589409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.870 [2024-11-18 07:20:45.589451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.870 [2024-11-18 07:20:45.589467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.870 [2024-11-18 07:20:45.589721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.870 [2024-11-18 07:20:45.589937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.870 [2024-11-18 07:20:45.589956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.870 [2024-11-18 07:20:45.589969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.870 [2024-11-18 07:20:45.589980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.870 [2024-11-18 07:20:45.602082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.870 [2024-11-18 07:20:45.602576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.870 [2024-11-18 07:20:45.602619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.871 [2024-11-18 07:20:45.602636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.871 [2024-11-18 07:20:45.602901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.871 [2024-11-18 07:20:45.603093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.871 [2024-11-18 07:20:45.603111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.871 [2024-11-18 07:20:45.603124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.871 [2024-11-18 07:20:45.603135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.871 [2024-11-18 07:20:45.615252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.871 [2024-11-18 07:20:45.615651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.871 [2024-11-18 07:20:45.615679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.871 [2024-11-18 07:20:45.615695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.871 [2024-11-18 07:20:45.615937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.871 [2024-11-18 07:20:45.616146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.871 [2024-11-18 07:20:45.616164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.871 [2024-11-18 07:20:45.616176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.871 [2024-11-18 07:20:45.616187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.871 [2024-11-18 07:20:45.628390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.871 [2024-11-18 07:20:45.628793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.871 [2024-11-18 07:20:45.628835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.871 [2024-11-18 07:20:45.628851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.871 [2024-11-18 07:20:45.629084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.871 [2024-11-18 07:20:45.629276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.871 [2024-11-18 07:20:45.629293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.871 [2024-11-18 07:20:45.629305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.871 [2024-11-18 07:20:45.629316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.871 [2024-11-18 07:20:45.641500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.871 [2024-11-18 07:20:45.641890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.871 [2024-11-18 07:20:45.641918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.871 [2024-11-18 07:20:45.641949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.871 [2024-11-18 07:20:45.642186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.871 [2024-11-18 07:20:45.642393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.871 [2024-11-18 07:20:45.642412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.871 [2024-11-18 07:20:45.642424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.871 [2024-11-18 07:20:45.642436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.871 [2024-11-18 07:20:45.655072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.871 [2024-11-18 07:20:45.655459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.871 [2024-11-18 07:20:45.655511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.871 [2024-11-18 07:20:45.655554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.871 [2024-11-18 07:20:45.655793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.871 [2024-11-18 07:20:45.656030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.871 [2024-11-18 07:20:45.656049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.871 [2024-11-18 07:20:45.656063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.871 [2024-11-18 07:20:45.656074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.871 [2024-11-18 07:20:45.668290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.871 [2024-11-18 07:20:45.668745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.871 [2024-11-18 07:20:45.668773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.871 [2024-11-18 07:20:45.668789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.871 [2024-11-18 07:20:45.669037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.871 [2024-11-18 07:20:45.669244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.871 [2024-11-18 07:20:45.669263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.871 [2024-11-18 07:20:45.669275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.871 [2024-11-18 07:20:45.669286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.871 [2024-11-18 07:20:45.681607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.871 [2024-11-18 07:20:45.682013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.871 [2024-11-18 07:20:45.682039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.871 [2024-11-18 07:20:45.682070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.871 [2024-11-18 07:20:45.682291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.871 [2024-11-18 07:20:45.682528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.871 [2024-11-18 07:20:45.682563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.871 [2024-11-18 07:20:45.682576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.871 [2024-11-18 07:20:45.682588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.871 [2024-11-18 07:20:45.694704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.871 [2024-11-18 07:20:45.695083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.871 [2024-11-18 07:20:45.695125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.871 [2024-11-18 07:20:45.695141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.871 [2024-11-18 07:20:45.695394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.871 [2024-11-18 07:20:45.695631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.871 [2024-11-18 07:20:45.695651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.871 [2024-11-18 07:20:45.695663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.871 [2024-11-18 07:20:45.695675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.871 [2024-11-18 07:20:45.707977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.871 [2024-11-18 07:20:45.708348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.871 [2024-11-18 07:20:45.708392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.871 [2024-11-18 07:20:45.708408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.871 [2024-11-18 07:20:45.708671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.871 [2024-11-18 07:20:45.708884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.871 [2024-11-18 07:20:45.708907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.871 [2024-11-18 07:20:45.708919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.871 [2024-11-18 07:20:45.708931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.871 [2024-11-18 07:20:45.721056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.871 [2024-11-18 07:20:45.721393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.871 [2024-11-18 07:20:45.721420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.871 [2024-11-18 07:20:45.721435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.871 [2024-11-18 07:20:45.721705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.871 [2024-11-18 07:20:45.721917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.871 [2024-11-18 07:20:45.721935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.871 [2024-11-18 07:20:45.721947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.871 [2024-11-18 07:20:45.721958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.871 [2024-11-18 07:20:45.734174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.872 [2024-11-18 07:20:45.734560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.872 [2024-11-18 07:20:45.734602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.872 [2024-11-18 07:20:45.734618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.872 [2024-11-18 07:20:45.734847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.872 [2024-11-18 07:20:45.735060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.872 [2024-11-18 07:20:45.735079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.872 [2024-11-18 07:20:45.735091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.872 [2024-11-18 07:20:45.735103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.872 [2024-11-18 07:20:45.747293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.872 [2024-11-18 07:20:45.747727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.872 [2024-11-18 07:20:45.747754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.872 [2024-11-18 07:20:45.747769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.872 [2024-11-18 07:20:45.748004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.872 [2024-11-18 07:20:45.748212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.872 [2024-11-18 07:20:45.748230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.872 [2024-11-18 07:20:45.748242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.872 [2024-11-18 07:20:45.748258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.872 [2024-11-18 07:20:45.760421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.872 [2024-11-18 07:20:45.760842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.872 [2024-11-18 07:20:45.760868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.872 [2024-11-18 07:20:45.760883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.872 [2024-11-18 07:20:45.761121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.872 [2024-11-18 07:20:45.761330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.872 [2024-11-18 07:20:45.761348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.872 [2024-11-18 07:20:45.761359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.872 [2024-11-18 07:20:45.761370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.872 [2024-11-18 07:20:45.773727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.872 [2024-11-18 07:20:45.774157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.872 [2024-11-18 07:20:45.774198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.872 [2024-11-18 07:20:45.774214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.872 [2024-11-18 07:20:45.774449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.872 [2024-11-18 07:20:45.774671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.872 [2024-11-18 07:20:45.774691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.872 [2024-11-18 07:20:45.774704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.872 [2024-11-18 07:20:45.774715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.872 [2024-11-18 07:20:45.787144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.872 [2024-11-18 07:20:45.787581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.872 [2024-11-18 07:20:45.787610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.872 [2024-11-18 07:20:45.787626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.872 [2024-11-18 07:20:45.787853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.872 [2024-11-18 07:20:45.788060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.872 [2024-11-18 07:20:45.788079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.872 [2024-11-18 07:20:45.788091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.872 [2024-11-18 07:20:45.788102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.872 [2024-11-18 07:20:45.800439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.872 [2024-11-18 07:20:45.800819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.872 [2024-11-18 07:20:45.800848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.872 [2024-11-18 07:20:45.800864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.872 [2024-11-18 07:20:45.801094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.872 [2024-11-18 07:20:45.801307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.872 [2024-11-18 07:20:45.801326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.872 [2024-11-18 07:20:45.801338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.872 [2024-11-18 07:20:45.801349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.872 [2024-11-18 07:20:45.813689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.872 [2024-11-18 07:20:45.814204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.872 [2024-11-18 07:20:45.814246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.872 [2024-11-18 07:20:45.814263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.872 [2024-11-18 07:20:45.814528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.872 [2024-11-18 07:20:45.814739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.872 [2024-11-18 07:20:45.814759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.872 [2024-11-18 07:20:45.814772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.872 [2024-11-18 07:20:45.814784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.872 [2024-11-18 07:20:45.826883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.872 [2024-11-18 07:20:45.827218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.872 [2024-11-18 07:20:45.827245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.872 [2024-11-18 07:20:45.827260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.872 [2024-11-18 07:20:45.827483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.872 [2024-11-18 07:20:45.827721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.872 [2024-11-18 07:20:45.827740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.872 [2024-11-18 07:20:45.827753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.872 [2024-11-18 07:20:45.827780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.872 [2024-11-18 07:20:45.840137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.872 [2024-11-18 07:20:45.840574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.872 [2024-11-18 07:20:45.840602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:24.872 [2024-11-18 07:20:45.840619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:24.872 [2024-11-18 07:20:45.840853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:24.872 [2024-11-18 07:20:45.841067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.872 [2024-11-18 07:20:45.841086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.872 [2024-11-18 07:20:45.841098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.872 [2024-11-18 07:20:45.841109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.132 [2024-11-18 07:20:45.853553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.132 [2024-11-18 07:20:45.853953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.132 [2024-11-18 07:20:45.853980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.132 [2024-11-18 07:20:45.853996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.132 [2024-11-18 07:20:45.854229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.132 [2024-11-18 07:20:45.854428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.132 [2024-11-18 07:20:45.854446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.132 [2024-11-18 07:20:45.854458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.132 [2024-11-18 07:20:45.854484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.132 [2024-11-18 07:20:45.866757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.132 [2024-11-18 07:20:45.867242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.132 [2024-11-18 07:20:45.867268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.132 [2024-11-18 07:20:45.867299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.132 [2024-11-18 07:20:45.867547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.132 [2024-11-18 07:20:45.867758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.132 [2024-11-18 07:20:45.867778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.132 [2024-11-18 07:20:45.867791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.132 [2024-11-18 07:20:45.867818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.132 7322.33 IOPS, 28.60 MiB/s [2024-11-18T06:20:46.110Z] [2024-11-18 07:20:45.880000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.132 [2024-11-18 07:20:45.880437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.132 [2024-11-18 07:20:45.880465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.132 [2024-11-18 07:20:45.880481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.132 [2024-11-18 07:20:45.880705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.132 [2024-11-18 07:20:45.880949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.132 [2024-11-18 07:20:45.880968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.132 [2024-11-18 07:20:45.880980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.132 [2024-11-18 07:20:45.880992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.132 [2024-11-18 07:20:45.893249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.132 [2024-11-18 07:20:45.893660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.132 [2024-11-18 07:20:45.893689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.132 [2024-11-18 07:20:45.893705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.132 [2024-11-18 07:20:45.893948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.132 [2024-11-18 07:20:45.894146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.132 [2024-11-18 07:20:45.894165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.132 [2024-11-18 07:20:45.894178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.132 [2024-11-18 07:20:45.894189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.132 [2024-11-18 07:20:45.906845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.132 [2024-11-18 07:20:45.907232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.132 [2024-11-18 07:20:45.907261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.132 [2024-11-18 07:20:45.907277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.133 [2024-11-18 07:20:45.907526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.133 [2024-11-18 07:20:45.907753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.133 [2024-11-18 07:20:45.907773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.133 [2024-11-18 07:20:45.907801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.133 [2024-11-18 07:20:45.907813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.133 [2024-11-18 07:20:45.920047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.133 [2024-11-18 07:20:45.920418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.133 [2024-11-18 07:20:45.920445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.133 [2024-11-18 07:20:45.920461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.133 [2024-11-18 07:20:45.920697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.133 [2024-11-18 07:20:45.920930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.133 [2024-11-18 07:20:45.920949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.133 [2024-11-18 07:20:45.920961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.133 [2024-11-18 07:20:45.920976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.133 [2024-11-18 07:20:45.933219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.133 [2024-11-18 07:20:45.933620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.133 [2024-11-18 07:20:45.933649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.133 [2024-11-18 07:20:45.933665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.133 [2024-11-18 07:20:45.933893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.133 [2024-11-18 07:20:45.934124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.133 [2024-11-18 07:20:45.934143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.133 [2024-11-18 07:20:45.934155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.133 [2024-11-18 07:20:45.934166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.133 [2024-11-18 07:20:45.946513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.133 [2024-11-18 07:20:45.946900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.133 [2024-11-18 07:20:45.946928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.133 [2024-11-18 07:20:45.946958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.133 [2024-11-18 07:20:45.947212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.133 [2024-11-18 07:20:45.947410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.133 [2024-11-18 07:20:45.947428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.133 [2024-11-18 07:20:45.947440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.133 [2024-11-18 07:20:45.947451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.133 [2024-11-18 07:20:45.959728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.133 [2024-11-18 07:20:45.960087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.133 [2024-11-18 07:20:45.960130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.133 [2024-11-18 07:20:45.960146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.133 [2024-11-18 07:20:45.960418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.133 [2024-11-18 07:20:45.960665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.133 [2024-11-18 07:20:45.960686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.133 [2024-11-18 07:20:45.960699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.133 [2024-11-18 07:20:45.960711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.133 [2024-11-18 07:20:45.972941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.133 [2024-11-18 07:20:45.973324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.133 [2024-11-18 07:20:45.973352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.133 [2024-11-18 07:20:45.973368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.133 [2024-11-18 07:20:45.973592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.133 [2024-11-18 07:20:45.973826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.133 [2024-11-18 07:20:45.973846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.133 [2024-11-18 07:20:45.973874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.133 [2024-11-18 07:20:45.973886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.133 [2024-11-18 07:20:45.986281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.133 [2024-11-18 07:20:45.986719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.133 [2024-11-18 07:20:45.986746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.133 [2024-11-18 07:20:45.986777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.133 [2024-11-18 07:20:45.987028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.133 [2024-11-18 07:20:45.987227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.133 [2024-11-18 07:20:45.987245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.133 [2024-11-18 07:20:45.987258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.133 [2024-11-18 07:20:45.987269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.133 [2024-11-18 07:20:45.999481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.133 [2024-11-18 07:20:45.999893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.133 [2024-11-18 07:20:45.999923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.133 [2024-11-18 07:20:45.999939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.133 [2024-11-18 07:20:46.000168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.133 [2024-11-18 07:20:46.000380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.133 [2024-11-18 07:20:46.000399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.133 [2024-11-18 07:20:46.000411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.133 [2024-11-18 07:20:46.000423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.133 [2024-11-18 07:20:46.012842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.133 [2024-11-18 07:20:46.013217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.133 [2024-11-18 07:20:46.013245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.133 [2024-11-18 07:20:46.013265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.133 [2024-11-18 07:20:46.013519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.133 [2024-11-18 07:20:46.013745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.133 [2024-11-18 07:20:46.013766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.133 [2024-11-18 07:20:46.013794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.133 [2024-11-18 07:20:46.013806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.133 [2024-11-18 07:20:46.026052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.133 [2024-11-18 07:20:46.026440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.133 [2024-11-18 07:20:46.026481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.133 [2024-11-18 07:20:46.026505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.133 [2024-11-18 07:20:46.026750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.133 [2024-11-18 07:20:46.026968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.133 [2024-11-18 07:20:46.026986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.133 [2024-11-18 07:20:46.026999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.133 [2024-11-18 07:20:46.027010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.133 [2024-11-18 07:20:46.039342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.133 [2024-11-18 07:20:46.039682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.133 [2024-11-18 07:20:46.039726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.134 [2024-11-18 07:20:46.039743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.134 [2024-11-18 07:20:46.039979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.134 [2024-11-18 07:20:46.040193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.134 [2024-11-18 07:20:46.040211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.134 [2024-11-18 07:20:46.040224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.134 [2024-11-18 07:20:46.040235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.134 [2024-11-18 07:20:46.052643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.134 [2024-11-18 07:20:46.053035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.134 [2024-11-18 07:20:46.053077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.134 [2024-11-18 07:20:46.053093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.134 [2024-11-18 07:20:46.053363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.134 [2024-11-18 07:20:46.053598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.134 [2024-11-18 07:20:46.053619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.134 [2024-11-18 07:20:46.053632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.134 [2024-11-18 07:20:46.053644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.134 [2024-11-18 07:20:46.065837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.134 [2024-11-18 07:20:46.066209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.134 [2024-11-18 07:20:46.066237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.134 [2024-11-18 07:20:46.066253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.134 [2024-11-18 07:20:46.066505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.134 [2024-11-18 07:20:46.066730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.134 [2024-11-18 07:20:46.066750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.134 [2024-11-18 07:20:46.066763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.134 [2024-11-18 07:20:46.066775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.134 [2024-11-18 07:20:46.079146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.134 [2024-11-18 07:20:46.079557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.134 [2024-11-18 07:20:46.079585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.134 [2024-11-18 07:20:46.079601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.134 [2024-11-18 07:20:46.079815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.134 [2024-11-18 07:20:46.080034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.134 [2024-11-18 07:20:46.080053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.134 [2024-11-18 07:20:46.080066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.134 [2024-11-18 07:20:46.080078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.134 [2024-11-18 07:20:46.092484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.134 [2024-11-18 07:20:46.092884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.134 [2024-11-18 07:20:46.092913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.134 [2024-11-18 07:20:46.092929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.134 [2024-11-18 07:20:46.093170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.134 [2024-11-18 07:20:46.093369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.134 [2024-11-18 07:20:46.093388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.134 [2024-11-18 07:20:46.093400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.134 [2024-11-18 07:20:46.093416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.134 [2024-11-18 07:20:46.105896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.134 [2024-11-18 07:20:46.106319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.134 [2024-11-18 07:20:46.106361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.134 [2024-11-18 07:20:46.106377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.134 [2024-11-18 07:20:46.106617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.134 [2024-11-18 07:20:46.106844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.134 [2024-11-18 07:20:46.106878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.134 [2024-11-18 07:20:46.106891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.134 [2024-11-18 07:20:46.106902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.394 [2024-11-18 07:20:46.119069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.394 [2024-11-18 07:20:46.119442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.394 [2024-11-18 07:20:46.119470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.394 [2024-11-18 07:20:46.119486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.394 [2024-11-18 07:20:46.119724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.394 [2024-11-18 07:20:46.119958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.394 [2024-11-18 07:20:46.119977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.394 [2024-11-18 07:20:46.119990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.394 [2024-11-18 07:20:46.120001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.394 [2024-11-18 07:20:46.132353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.394 [2024-11-18 07:20:46.132786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.394 [2024-11-18 07:20:46.132813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.394 [2024-11-18 07:20:46.132828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.394 [2024-11-18 07:20:46.133049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.394 [2024-11-18 07:20:46.133262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.394 [2024-11-18 07:20:46.133281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.394 [2024-11-18 07:20:46.133293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.394 [2024-11-18 07:20:46.133304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.394 [2024-11-18 07:20:46.145681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.394 [2024-11-18 07:20:46.146049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.394 [2024-11-18 07:20:46.146078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.394 [2024-11-18 07:20:46.146093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.395 [2024-11-18 07:20:46.146321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.395 [2024-11-18 07:20:46.146563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.395 [2024-11-18 07:20:46.146599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.395 [2024-11-18 07:20:46.146612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.395 [2024-11-18 07:20:46.146624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.395 [2024-11-18 07:20:46.159017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.395 [2024-11-18 07:20:46.159420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.395 [2024-11-18 07:20:46.159448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.395 [2024-11-18 07:20:46.159464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.395 [2024-11-18 07:20:46.159704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.395 [2024-11-18 07:20:46.159939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.395 [2024-11-18 07:20:46.159958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.395 [2024-11-18 07:20:46.159970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.395 [2024-11-18 07:20:46.159981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.395 [2024-11-18 07:20:46.172380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.395 [2024-11-18 07:20:46.172724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.395 [2024-11-18 07:20:46.172767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.395 [2024-11-18 07:20:46.172783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.395 [2024-11-18 07:20:46.173014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.395 [2024-11-18 07:20:46.173228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.395 [2024-11-18 07:20:46.173246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.395 [2024-11-18 07:20:46.173259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.395 [2024-11-18 07:20:46.173270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.395 [2024-11-18 07:20:46.185611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.395 [2024-11-18 07:20:46.186002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.395 [2024-11-18 07:20:46.186029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.395 [2024-11-18 07:20:46.186050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.395 [2024-11-18 07:20:46.186292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.395 [2024-11-18 07:20:46.186515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.395 [2024-11-18 07:20:46.186559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.395 [2024-11-18 07:20:46.186573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.395 [2024-11-18 07:20:46.186585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.395 [2024-11-18 07:20:46.198928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.395 [2024-11-18 07:20:46.199273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.395 [2024-11-18 07:20:46.199301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.395 [2024-11-18 07:20:46.199317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.395 [2024-11-18 07:20:46.199548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.395 [2024-11-18 07:20:46.199753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.395 [2024-11-18 07:20:46.199772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.395 [2024-11-18 07:20:46.199785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.395 [2024-11-18 07:20:46.199797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.395 [2024-11-18 07:20:46.212239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.395 [2024-11-18 07:20:46.212606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.395 [2024-11-18 07:20:46.212634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.395 [2024-11-18 07:20:46.212650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.395 [2024-11-18 07:20:46.212879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.395 [2024-11-18 07:20:46.213094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.395 [2024-11-18 07:20:46.213113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.395 [2024-11-18 07:20:46.213125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.395 [2024-11-18 07:20:46.213137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.395 [2024-11-18 07:20:46.225555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.395 [2024-11-18 07:20:46.225970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.395 [2024-11-18 07:20:46.225998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.395 [2024-11-18 07:20:46.226013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.395 [2024-11-18 07:20:46.226254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.395 [2024-11-18 07:20:46.226458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.395 [2024-11-18 07:20:46.226500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.395 [2024-11-18 07:20:46.226515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.395 [2024-11-18 07:20:46.226527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.395 [2024-11-18 07:20:46.238736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.395 [2024-11-18 07:20:46.239252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.395 [2024-11-18 07:20:46.239278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.395 [2024-11-18 07:20:46.239309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.395 [2024-11-18 07:20:46.239575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.395 [2024-11-18 07:20:46.239786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.395 [2024-11-18 07:20:46.239805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.395 [2024-11-18 07:20:46.239833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.395 [2024-11-18 07:20:46.239844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.395 [2024-11-18 07:20:46.251963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.395 [2024-11-18 07:20:46.252333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.395 [2024-11-18 07:20:46.252361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.395 [2024-11-18 07:20:46.252377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.395 [2024-11-18 07:20:46.252616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.395 [2024-11-18 07:20:46.252855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.395 [2024-11-18 07:20:46.252874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.395 [2024-11-18 07:20:46.252886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.395 [2024-11-18 07:20:46.252898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.395 [2024-11-18 07:20:46.265289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.395 [2024-11-18 07:20:46.265702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.395 [2024-11-18 07:20:46.265731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.395 [2024-11-18 07:20:46.265747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.395 [2024-11-18 07:20:46.265976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.395 [2024-11-18 07:20:46.266189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.395 [2024-11-18 07:20:46.266208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.395 [2024-11-18 07:20:46.266220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.395 [2024-11-18 07:20:46.266236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.395 [2024-11-18 07:20:46.278558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.395 [2024-11-18 07:20:46.278977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.395 [2024-11-18 07:20:46.279020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.395 [2024-11-18 07:20:46.279035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.396 [2024-11-18 07:20:46.279279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.396 [2024-11-18 07:20:46.279520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.396 [2024-11-18 07:20:46.279556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.396 [2024-11-18 07:20:46.279571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.396 [2024-11-18 07:20:46.279583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.396 [2024-11-18 07:20:46.291903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.396 [2024-11-18 07:20:46.292314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.396 [2024-11-18 07:20:46.292340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.396 [2024-11-18 07:20:46.292369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.396 [2024-11-18 07:20:46.292622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.396 [2024-11-18 07:20:46.292848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.396 [2024-11-18 07:20:46.292883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.396 [2024-11-18 07:20:46.292895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.396 [2024-11-18 07:20:46.292907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.396 [2024-11-18 07:20:46.305123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.396 [2024-11-18 07:20:46.305523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.396 [2024-11-18 07:20:46.305565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.396 [2024-11-18 07:20:46.305582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.396 [2024-11-18 07:20:46.305810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.396 [2024-11-18 07:20:46.306024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.396 [2024-11-18 07:20:46.306043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.396 [2024-11-18 07:20:46.306055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.396 [2024-11-18 07:20:46.306067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.396 [2024-11-18 07:20:46.318403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.396 [2024-11-18 07:20:46.318852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.396 [2024-11-18 07:20:46.318880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.396 [2024-11-18 07:20:46.318897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.396 [2024-11-18 07:20:46.319128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.396 [2024-11-18 07:20:46.319343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.396 [2024-11-18 07:20:46.319362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.396 [2024-11-18 07:20:46.319374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.396 [2024-11-18 07:20:46.319386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.396 [2024-11-18 07:20:46.331604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.396 [2024-11-18 07:20:46.331933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.396 [2024-11-18 07:20:46.331974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.396 [2024-11-18 07:20:46.331990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.396 [2024-11-18 07:20:46.332211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.396 [2024-11-18 07:20:46.332425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.396 [2024-11-18 07:20:46.332444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.396 [2024-11-18 07:20:46.332456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.396 [2024-11-18 07:20:46.332467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.396 [2024-11-18 07:20:46.344928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.396 [2024-11-18 07:20:46.345302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.396 [2024-11-18 07:20:46.345345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.396 [2024-11-18 07:20:46.345360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.396 [2024-11-18 07:20:46.345600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.396 [2024-11-18 07:20:46.345847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.396 [2024-11-18 07:20:46.345866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.396 [2024-11-18 07:20:46.345878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.396 [2024-11-18 07:20:46.345890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.396 [2024-11-18 07:20:46.358152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.396 [2024-11-18 07:20:46.358591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.396 [2024-11-18 07:20:46.358619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.396 [2024-11-18 07:20:46.358641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.396 [2024-11-18 07:20:46.358870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.396 [2024-11-18 07:20:46.359085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.396 [2024-11-18 07:20:46.359104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.396 [2024-11-18 07:20:46.359116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.396 [2024-11-18 07:20:46.359127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.656 [2024-11-18 07:20:46.371770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.656 [2024-11-18 07:20:46.372197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.656 [2024-11-18 07:20:46.372225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.656 [2024-11-18 07:20:46.372241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.656 [2024-11-18 07:20:46.372455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.656 [2024-11-18 07:20:46.372696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.656 [2024-11-18 07:20:46.372717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.656 [2024-11-18 07:20:46.372730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.656 [2024-11-18 07:20:46.372742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.656 [2024-11-18 07:20:46.385198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.656 [2024-11-18 07:20:46.385551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.656 [2024-11-18 07:20:46.385578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.656 [2024-11-18 07:20:46.385594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.656 [2024-11-18 07:20:46.385821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.656 [2024-11-18 07:20:46.386035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.656 [2024-11-18 07:20:46.386054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.656 [2024-11-18 07:20:46.386066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.656 [2024-11-18 07:20:46.386078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.656 [2024-11-18 07:20:46.398636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.656 [2024-11-18 07:20:46.399016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.656 [2024-11-18 07:20:46.399044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.656 [2024-11-18 07:20:46.399060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.656 [2024-11-18 07:20:46.399302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.656 [2024-11-18 07:20:46.399546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.656 [2024-11-18 07:20:46.399567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.656 [2024-11-18 07:20:46.399580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.656 [2024-11-18 07:20:46.399591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.656 [2024-11-18 07:20:46.411903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.656 [2024-11-18 07:20:46.412346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.656 [2024-11-18 07:20:46.412374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.656 [2024-11-18 07:20:46.412390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.656 [2024-11-18 07:20:46.412629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.656 [2024-11-18 07:20:46.412862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.656 [2024-11-18 07:20:46.412881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.656 [2024-11-18 07:20:46.412893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.656 [2024-11-18 07:20:46.412904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.656 [2024-11-18 07:20:46.425152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.656 [2024-11-18 07:20:46.425526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.656 [2024-11-18 07:20:46.425554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.656 [2024-11-18 07:20:46.425570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.656 [2024-11-18 07:20:46.425784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.656 [2024-11-18 07:20:46.425997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.656 [2024-11-18 07:20:46.426015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.657 [2024-11-18 07:20:46.426028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.657 [2024-11-18 07:20:46.426039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.657 [2024-11-18 07:20:46.438409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.657 [2024-11-18 07:20:46.438799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.657 [2024-11-18 07:20:46.438827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.657 [2024-11-18 07:20:46.438843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.657 [2024-11-18 07:20:46.439072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.657 [2024-11-18 07:20:46.439286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.657 [2024-11-18 07:20:46.439304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.657 [2024-11-18 07:20:46.439317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.657 [2024-11-18 07:20:46.439333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.657 [2024-11-18 07:20:46.451680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.657 [2024-11-18 07:20:46.452010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.657 [2024-11-18 07:20:46.452052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.657 [2024-11-18 07:20:46.452067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.657 [2024-11-18 07:20:46.452289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.657 [2024-11-18 07:20:46.452530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.657 [2024-11-18 07:20:46.452550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.657 [2024-11-18 07:20:46.452563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.657 [2024-11-18 07:20:46.452575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.657 [2024-11-18 07:20:46.464996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.657 [2024-11-18 07:20:46.465365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.657 [2024-11-18 07:20:46.465392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.657 [2024-11-18 07:20:46.465408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.657 [2024-11-18 07:20:46.465646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.657 [2024-11-18 07:20:46.465879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.657 [2024-11-18 07:20:46.465898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.657 [2024-11-18 07:20:46.465910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.657 [2024-11-18 07:20:46.465921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.657 [2024-11-18 07:20:46.478171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.657 [2024-11-18 07:20:46.478519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.657 [2024-11-18 07:20:46.478547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.657 [2024-11-18 07:20:46.478563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.657 [2024-11-18 07:20:46.478776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.657 [2024-11-18 07:20:46.479027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.657 [2024-11-18 07:20:46.479047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.657 [2024-11-18 07:20:46.479059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.657 [2024-11-18 07:20:46.479071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.657 [2024-11-18 07:20:46.491392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.657 [2024-11-18 07:20:46.491831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.657 [2024-11-18 07:20:46.491859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.657 [2024-11-18 07:20:46.491874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.657 [2024-11-18 07:20:46.492131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.657 [2024-11-18 07:20:46.492330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.657 [2024-11-18 07:20:46.492349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.657 [2024-11-18 07:20:46.492361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.657 [2024-11-18 07:20:46.492373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.657 [2024-11-18 07:20:46.504667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.657 [2024-11-18 07:20:46.505037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.657 [2024-11-18 07:20:46.505064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.657 [2024-11-18 07:20:46.505080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.657 [2024-11-18 07:20:46.505312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.657 [2024-11-18 07:20:46.505553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.657 [2024-11-18 07:20:46.505574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.657 [2024-11-18 07:20:46.505588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.657 [2024-11-18 07:20:46.505600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.657 [2024-11-18 07:20:46.517878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.657 [2024-11-18 07:20:46.518215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.657 [2024-11-18 07:20:46.518241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.657 [2024-11-18 07:20:46.518256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.657 [2024-11-18 07:20:46.518456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.657 [2024-11-18 07:20:46.518688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.657 [2024-11-18 07:20:46.518709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.657 [2024-11-18 07:20:46.518722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.657 [2024-11-18 07:20:46.518735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.657 [2024-11-18 07:20:46.531178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.657 [2024-11-18 07:20:46.531563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.657 [2024-11-18 07:20:46.531591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.657 [2024-11-18 07:20:46.531612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.657 [2024-11-18 07:20:46.531841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.657 [2024-11-18 07:20:46.532055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.657 [2024-11-18 07:20:46.532074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.657 [2024-11-18 07:20:46.532086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.657 [2024-11-18 07:20:46.532097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.657 [2024-11-18 07:20:46.544554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.657 [2024-11-18 07:20:46.544976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.657 [2024-11-18 07:20:46.545004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.657 [2024-11-18 07:20:46.545020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.657 [2024-11-18 07:20:46.545243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.657 [2024-11-18 07:20:46.545458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.657 [2024-11-18 07:20:46.545502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.657 [2024-11-18 07:20:46.545518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.657 [2024-11-18 07:20:46.545530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.657 [2024-11-18 07:20:46.557820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.657 [2024-11-18 07:20:46.558213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.657 [2024-11-18 07:20:46.558240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.657 [2024-11-18 07:20:46.558256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.657 [2024-11-18 07:20:46.558478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.658 [2024-11-18 07:20:46.558713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.658 [2024-11-18 07:20:46.558733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.658 [2024-11-18 07:20:46.558746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.658 [2024-11-18 07:20:46.558758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.658 [2024-11-18 07:20:46.571049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.658 [2024-11-18 07:20:46.571448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.658 [2024-11-18 07:20:46.571476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.658 [2024-11-18 07:20:46.571501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.658 [2024-11-18 07:20:46.571718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.658 [2024-11-18 07:20:46.571959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.658 [2024-11-18 07:20:46.571983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.658 [2024-11-18 07:20:46.571997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.658 [2024-11-18 07:20:46.572009] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.658 [2024-11-18 07:20:46.584427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.658 [2024-11-18 07:20:46.584870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.658 [2024-11-18 07:20:46.584898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.658 [2024-11-18 07:20:46.584914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.658 [2024-11-18 07:20:46.585145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.658 [2024-11-18 07:20:46.585359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.658 [2024-11-18 07:20:46.585378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.658 [2024-11-18 07:20:46.585390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.658 [2024-11-18 07:20:46.585401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.658 [2024-11-18 07:20:46.597772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.658 [2024-11-18 07:20:46.598176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.658 [2024-11-18 07:20:46.598204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.658 [2024-11-18 07:20:46.598220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.658 [2024-11-18 07:20:46.598462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.658 [2024-11-18 07:20:46.598707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.658 [2024-11-18 07:20:46.598729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.658 [2024-11-18 07:20:46.598742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.658 [2024-11-18 07:20:46.598754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.658 [2024-11-18 07:20:46.610952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.658 [2024-11-18 07:20:46.611386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.658 [2024-11-18 07:20:46.611412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.658 [2024-11-18 07:20:46.611428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.658 [2024-11-18 07:20:46.611666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.658 [2024-11-18 07:20:46.611903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.658 [2024-11-18 07:20:46.611922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.658 [2024-11-18 07:20:46.611935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.658 [2024-11-18 07:20:46.611955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.658 [2024-11-18 07:20:46.624174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.658 [2024-11-18 07:20:46.624549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.658 [2024-11-18 07:20:46.624591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.658 [2024-11-18 07:20:46.624607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.658 [2024-11-18 07:20:46.624851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.658 [2024-11-18 07:20:46.625081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.658 [2024-11-18 07:20:46.625100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.658 [2024-11-18 07:20:46.625112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.658 [2024-11-18 07:20:46.625124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.917 [2024-11-18 07:20:46.637527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.917 [2024-11-18 07:20:46.637911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.917 [2024-11-18 07:20:46.637939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.917 [2024-11-18 07:20:46.637955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.917 [2024-11-18 07:20:46.638183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.918 [2024-11-18 07:20:46.638417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.918 [2024-11-18 07:20:46.638436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.918 [2024-11-18 07:20:46.638448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.918 [2024-11-18 07:20:46.638459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.918 [2024-11-18 07:20:46.650820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.918 [2024-11-18 07:20:46.651214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.918 [2024-11-18 07:20:46.651242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.918 [2024-11-18 07:20:46.651258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.918 [2024-11-18 07:20:46.651480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.918 [2024-11-18 07:20:46.651718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.918 [2024-11-18 07:20:46.651738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.918 [2024-11-18 07:20:46.651752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.918 [2024-11-18 07:20:46.651764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.918 [2024-11-18 07:20:46.664431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.918 [2024-11-18 07:20:46.664851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.918 [2024-11-18 07:20:46.664879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.918 [2024-11-18 07:20:46.664895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.918 [2024-11-18 07:20:46.665126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.918 [2024-11-18 07:20:46.665340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.918 [2024-11-18 07:20:46.665359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.918 [2024-11-18 07:20:46.665371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.918 [2024-11-18 07:20:46.665382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.918 [2024-11-18 07:20:46.677850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.918 [2024-11-18 07:20:46.678223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.918 [2024-11-18 07:20:46.678265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.918 [2024-11-18 07:20:46.678281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.918 [2024-11-18 07:20:46.678544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.918 [2024-11-18 07:20:46.678748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.918 [2024-11-18 07:20:46.678767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.918 [2024-11-18 07:20:46.678794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.918 [2024-11-18 07:20:46.678806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.918 [2024-11-18 07:20:46.691129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.918 [2024-11-18 07:20:46.691502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.918 [2024-11-18 07:20:46.691530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.918 [2024-11-18 07:20:46.691547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.918 [2024-11-18 07:20:46.691788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.918 [2024-11-18 07:20:46.692003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.918 [2024-11-18 07:20:46.692021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.918 [2024-11-18 07:20:46.692034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.918 [2024-11-18 07:20:46.692045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.918 [2024-11-18 07:20:46.704376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.918 [2024-11-18 07:20:46.704750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.918 [2024-11-18 07:20:46.704778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.918 [2024-11-18 07:20:46.704814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.918 [2024-11-18 07:20:46.705035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.918 [2024-11-18 07:20:46.705247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.918 [2024-11-18 07:20:46.705266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.918 [2024-11-18 07:20:46.705278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.918 [2024-11-18 07:20:46.705289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.918 [2024-11-18 07:20:46.717747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.918 [2024-11-18 07:20:46.718139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.918 [2024-11-18 07:20:46.718167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.918 [2024-11-18 07:20:46.718183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.918 [2024-11-18 07:20:46.718424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.918 [2024-11-18 07:20:46.718663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.918 [2024-11-18 07:20:46.718698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.918 [2024-11-18 07:20:46.718712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.918 [2024-11-18 07:20:46.718725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.918 [2024-11-18 07:20:46.731015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.918 [2024-11-18 07:20:46.731404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.918 [2024-11-18 07:20:46.731447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.918 [2024-11-18 07:20:46.731464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.918 [2024-11-18 07:20:46.731703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.918 [2024-11-18 07:20:46.731935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.918 [2024-11-18 07:20:46.731954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.918 [2024-11-18 07:20:46.731967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.918 [2024-11-18 07:20:46.731978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.918 [2024-11-18 07:20:46.744200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.918 [2024-11-18 07:20:46.744543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.918 [2024-11-18 07:20:46.744571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.918 [2024-11-18 07:20:46.744587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.918 [2024-11-18 07:20:46.744819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.918 [2024-11-18 07:20:46.745032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.918 [2024-11-18 07:20:46.745055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.918 [2024-11-18 07:20:46.745068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.918 [2024-11-18 07:20:46.745080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.918 [2024-11-18 07:20:46.757498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.918 [2024-11-18 07:20:46.757875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.918 [2024-11-18 07:20:46.757902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.918 [2024-11-18 07:20:46.757918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.918 [2024-11-18 07:20:46.758145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.918 [2024-11-18 07:20:46.758357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.918 [2024-11-18 07:20:46.758376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.918 [2024-11-18 07:20:46.758388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.918 [2024-11-18 07:20:46.758399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.918 [2024-11-18 07:20:46.770785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.918 [2024-11-18 07:20:46.771179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.918 [2024-11-18 07:20:46.771207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.919 [2024-11-18 07:20:46.771223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.919 [2024-11-18 07:20:46.771464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.919 [2024-11-18 07:20:46.771698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.919 [2024-11-18 07:20:46.771719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.919 [2024-11-18 07:20:46.771732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.919 [2024-11-18 07:20:46.771744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.919 [2024-11-18 07:20:46.784099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.919 [2024-11-18 07:20:46.784545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.919 [2024-11-18 07:20:46.784574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.919 [2024-11-18 07:20:46.784590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.919 [2024-11-18 07:20:46.784831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.919 [2024-11-18 07:20:46.785029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.919 [2024-11-18 07:20:46.785048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.919 [2024-11-18 07:20:46.785061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.919 [2024-11-18 07:20:46.785077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.919 [2024-11-18 07:20:46.797346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.919 [2024-11-18 07:20:46.797784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.919 [2024-11-18 07:20:46.797812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.919 [2024-11-18 07:20:46.797828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.919 [2024-11-18 07:20:46.798056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.919 [2024-11-18 07:20:46.798270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.919 [2024-11-18 07:20:46.798288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.919 [2024-11-18 07:20:46.798301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.919 [2024-11-18 07:20:46.798313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.919 [2024-11-18 07:20:46.810698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.919 [2024-11-18 07:20:46.811087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.919 [2024-11-18 07:20:46.811130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.919 [2024-11-18 07:20:46.811147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.919 [2024-11-18 07:20:46.811417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.919 [2024-11-18 07:20:46.811664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.919 [2024-11-18 07:20:46.811685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.919 [2024-11-18 07:20:46.811698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.919 [2024-11-18 07:20:46.811711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.919 [2024-11-18 07:20:46.823994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.919 [2024-11-18 07:20:46.824339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.919 [2024-11-18 07:20:46.824367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.919 [2024-11-18 07:20:46.824384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.919 [2024-11-18 07:20:46.824608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.919 [2024-11-18 07:20:46.824851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.919 [2024-11-18 07:20:46.824885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.919 [2024-11-18 07:20:46.824897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.919 [2024-11-18 07:20:46.824908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.919 [2024-11-18 07:20:46.837264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.919 [2024-11-18 07:20:46.837668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.919 [2024-11-18 07:20:46.837697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.919 [2024-11-18 07:20:46.837713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.919 [2024-11-18 07:20:46.837955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.919 [2024-11-18 07:20:46.838168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.919 [2024-11-18 07:20:46.838187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.919 [2024-11-18 07:20:46.838199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.919 [2024-11-18 07:20:46.838211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.919 [2024-11-18 07:20:46.850514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.919 [2024-11-18 07:20:46.850935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.919 [2024-11-18 07:20:46.850976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.919 [2024-11-18 07:20:46.850992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.919 [2024-11-18 07:20:46.851234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.919 [2024-11-18 07:20:46.851431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.919 [2024-11-18 07:20:46.851454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.919 [2024-11-18 07:20:46.851467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.919 [2024-11-18 07:20:46.851502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.919 [2024-11-18 07:20:46.863911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.919 [2024-11-18 07:20:46.864411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.919 [2024-11-18 07:20:46.864452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.919 [2024-11-18 07:20:46.864469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.919 [2024-11-18 07:20:46.864690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.919 [2024-11-18 07:20:46.864931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.919 [2024-11-18 07:20:46.864950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.919 [2024-11-18 07:20:46.864962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.919 [2024-11-18 07:20:46.864974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.919 5491.75 IOPS, 21.45 MiB/s [2024-11-18T06:20:46.897Z] [2024-11-18 07:20:46.878690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.919 [2024-11-18 07:20:46.879120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.919 [2024-11-18 07:20:46.879163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.919 [2024-11-18 07:20:46.879184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.919 [2024-11-18 07:20:46.879426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.919 [2024-11-18 07:20:46.879657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.919 [2024-11-18 07:20:46.879677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.919 [2024-11-18 07:20:46.879690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.919 [2024-11-18 07:20:46.879702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.919 [2024-11-18 07:20:46.892199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.919 [2024-11-18 07:20:46.892620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.919 [2024-11-18 07:20:46.892649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:25.919 [2024-11-18 07:20:46.892665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:25.919 [2024-11-18 07:20:46.892879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:25.919 [2024-11-18 07:20:46.893138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.919 [2024-11-18 07:20:46.893158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.919 [2024-11-18 07:20:46.893171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.919 [2024-11-18 07:20:46.893182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.179 [2024-11-18 07:20:46.905400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.179 [2024-11-18 07:20:46.905792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.179 [2024-11-18 07:20:46.905821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.179 [2024-11-18 07:20:46.905837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.179 [2024-11-18 07:20:46.906066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.179 [2024-11-18 07:20:46.906280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.179 [2024-11-18 07:20:46.906298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.179 [2024-11-18 07:20:46.906310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.179 [2024-11-18 07:20:46.906322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.179 [2024-11-18 07:20:46.918921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.179 [2024-11-18 07:20:46.919275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.179 [2024-11-18 07:20:46.919301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.179 [2024-11-18 07:20:46.919316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.179 [2024-11-18 07:20:46.919559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.179 [2024-11-18 07:20:46.919769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.179 [2024-11-18 07:20:46.919789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.179 [2024-11-18 07:20:46.919816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.179 [2024-11-18 07:20:46.919827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.179 [2024-11-18 07:20:46.932252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.179 [2024-11-18 07:20:46.932594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.179 [2024-11-18 07:20:46.932636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.179 [2024-11-18 07:20:46.932652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.179 [2024-11-18 07:20:46.932893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.179 [2024-11-18 07:20:46.933107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.179 [2024-11-18 07:20:46.933126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.179 [2024-11-18 07:20:46.933139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.179 [2024-11-18 07:20:46.933150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.179 [2024-11-18 07:20:46.945587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.179 [2024-11-18 07:20:46.945981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.179 [2024-11-18 07:20:46.946024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.179 [2024-11-18 07:20:46.946040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.179 [2024-11-18 07:20:46.946312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.179 [2024-11-18 07:20:46.946554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.179 [2024-11-18 07:20:46.946574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.180 [2024-11-18 07:20:46.946587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.180 [2024-11-18 07:20:46.946599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.180 [2024-11-18 07:20:46.958829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.180 [2024-11-18 07:20:46.959262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.180 [2024-11-18 07:20:46.959289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.180 [2024-11-18 07:20:46.959305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.180 [2024-11-18 07:20:46.959544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.180 [2024-11-18 07:20:46.959755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.180 [2024-11-18 07:20:46.959792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.180 [2024-11-18 07:20:46.959810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.180 [2024-11-18 07:20:46.959823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.180 [2024-11-18 07:20:46.972003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.180 [2024-11-18 07:20:46.972332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.180 [2024-11-18 07:20:46.972359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.180 [2024-11-18 07:20:46.972375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.180 [2024-11-18 07:20:46.972625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.180 [2024-11-18 07:20:46.972851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.180 [2024-11-18 07:20:46.972870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.180 [2024-11-18 07:20:46.972882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.180 [2024-11-18 07:20:46.972893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.180 [2024-11-18 07:20:46.985320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.180 [2024-11-18 07:20:46.985672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.180 [2024-11-18 07:20:46.985700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.180 [2024-11-18 07:20:46.985716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.180 [2024-11-18 07:20:46.985944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.180 [2024-11-18 07:20:46.986158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.180 [2024-11-18 07:20:46.986177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.180 [2024-11-18 07:20:46.986189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.180 [2024-11-18 07:20:46.986201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.180 [2024-11-18 07:20:46.998649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.180 [2024-11-18 07:20:46.999101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.180 [2024-11-18 07:20:46.999129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.180 [2024-11-18 07:20:46.999145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.180 [2024-11-18 07:20:46.999387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.180 [2024-11-18 07:20:46.999635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.180 [2024-11-18 07:20:46.999656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.180 [2024-11-18 07:20:46.999670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.180 [2024-11-18 07:20:46.999683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.180 [2024-11-18 07:20:47.011899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.180 [2024-11-18 07:20:47.012339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.180 [2024-11-18 07:20:47.012366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.180 [2024-11-18 07:20:47.012382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.180 [2024-11-18 07:20:47.012621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.180 [2024-11-18 07:20:47.012861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.180 [2024-11-18 07:20:47.012879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.180 [2024-11-18 07:20:47.012892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.180 [2024-11-18 07:20:47.012903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.180 [2024-11-18 07:20:47.025252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.180 [2024-11-18 07:20:47.025617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.180 [2024-11-18 07:20:47.025646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.180 [2024-11-18 07:20:47.025662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.180 [2024-11-18 07:20:47.025903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.180 [2024-11-18 07:20:47.026111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.180 [2024-11-18 07:20:47.026129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.180 [2024-11-18 07:20:47.026141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.180 [2024-11-18 07:20:47.026152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.180 [2024-11-18 07:20:47.038318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.180 [2024-11-18 07:20:47.038718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.180 [2024-11-18 07:20:47.038747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.180 [2024-11-18 07:20:47.038763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.180 [2024-11-18 07:20:47.039006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.180 [2024-11-18 07:20:47.039214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.180 [2024-11-18 07:20:47.039233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.180 [2024-11-18 07:20:47.039245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.180 [2024-11-18 07:20:47.039256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.180 [2024-11-18 07:20:47.051438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.180 [2024-11-18 07:20:47.051826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.180 [2024-11-18 07:20:47.051870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.180 [2024-11-18 07:20:47.051890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.180 [2024-11-18 07:20:47.052125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.180 [2024-11-18 07:20:47.052316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.180 [2024-11-18 07:20:47.052334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.180 [2024-11-18 07:20:47.052347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.180 [2024-11-18 07:20:47.052358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.180 [2024-11-18 07:20:47.064630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.180 [2024-11-18 07:20:47.064986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.180 [2024-11-18 07:20:47.065014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.180 [2024-11-18 07:20:47.065030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.180 [2024-11-18 07:20:47.065251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.180 [2024-11-18 07:20:47.065467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.180 [2024-11-18 07:20:47.065486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.180 [2024-11-18 07:20:47.065524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.180 [2024-11-18 07:20:47.065538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.180 [2024-11-18 07:20:47.077635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.180 [2024-11-18 07:20:47.077998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.180 [2024-11-18 07:20:47.078024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.180 [2024-11-18 07:20:47.078040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.180 [2024-11-18 07:20:47.078275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.180 [2024-11-18 07:20:47.078483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.181 [2024-11-18 07:20:47.078525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.181 [2024-11-18 07:20:47.078538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.181 [2024-11-18 07:20:47.078550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.181 [2024-11-18 07:20:47.090713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.181 [2024-11-18 07:20:47.091210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.181 [2024-11-18 07:20:47.091237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.181 [2024-11-18 07:20:47.091268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.181 [2024-11-18 07:20:47.091532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.181 [2024-11-18 07:20:47.091736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.181 [2024-11-18 07:20:47.091754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.181 [2024-11-18 07:20:47.091767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.181 [2024-11-18 07:20:47.091792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.181 [2024-11-18 07:20:47.103766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.181 [2024-11-18 07:20:47.104127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.181 [2024-11-18 07:20:47.104154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.181 [2024-11-18 07:20:47.104169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.181 [2024-11-18 07:20:47.104403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.181 [2024-11-18 07:20:47.104639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.181 [2024-11-18 07:20:47.104659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.181 [2024-11-18 07:20:47.104671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.181 [2024-11-18 07:20:47.104682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.181 [2024-11-18 07:20:47.116875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.181 [2024-11-18 07:20:47.117269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.181 [2024-11-18 07:20:47.117295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.181 [2024-11-18 07:20:47.117310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.181 [2024-11-18 07:20:47.117543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.181 [2024-11-18 07:20:47.117780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.181 [2024-11-18 07:20:47.117815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.181 [2024-11-18 07:20:47.117828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.181 [2024-11-18 07:20:47.117839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.181 [2024-11-18 07:20:47.129944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.181 [2024-11-18 07:20:47.130433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.181 [2024-11-18 07:20:47.130476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.181 [2024-11-18 07:20:47.130502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.181 [2024-11-18 07:20:47.130733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.181 [2024-11-18 07:20:47.130961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.181 [2024-11-18 07:20:47.130979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.181 [2024-11-18 07:20:47.130995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.181 [2024-11-18 07:20:47.131007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.181 [2024-11-18 07:20:47.143064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.181 [2024-11-18 07:20:47.143467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.181 [2024-11-18 07:20:47.143546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.181 [2024-11-18 07:20:47.143562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.181 [2024-11-18 07:20:47.143830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.181 [2024-11-18 07:20:47.144022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.181 [2024-11-18 07:20:47.144040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.181 [2024-11-18 07:20:47.144052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.181 [2024-11-18 07:20:47.144063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.181 [2024-11-18 07:20:47.156605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.441 [2024-11-18 07:20:47.157061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.441 [2024-11-18 07:20:47.157091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.441 [2024-11-18 07:20:47.157107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.441 [2024-11-18 07:20:47.157336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.441 [2024-11-18 07:20:47.157610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.441 [2024-11-18 07:20:47.157632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.441 [2024-11-18 07:20:47.157662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.441 [2024-11-18 07:20:47.157674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.441 [2024-11-18 07:20:47.170270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.441 [2024-11-18 07:20:47.170631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.441 [2024-11-18 07:20:47.170659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.441 [2024-11-18 07:20:47.170675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.441 [2024-11-18 07:20:47.170904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.441 [2024-11-18 07:20:47.171118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.441 [2024-11-18 07:20:47.171136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.441 [2024-11-18 07:20:47.171149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.441 [2024-11-18 07:20:47.171160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.441 [2024-11-18 07:20:47.183607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.441 [2024-11-18 07:20:47.184096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.441 [2024-11-18 07:20:47.184146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.441 [2024-11-18 07:20:47.184162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.441 [2024-11-18 07:20:47.184422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.441 [2024-11-18 07:20:47.184659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.441 [2024-11-18 07:20:47.184681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.441 [2024-11-18 07:20:47.184694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.441 [2024-11-18 07:20:47.184707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.441 [2024-11-18 07:20:47.196965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.441 [2024-11-18 07:20:47.197325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.441 [2024-11-18 07:20:47.197352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.441 [2024-11-18 07:20:47.197368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.441 [2024-11-18 07:20:47.197619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.441 [2024-11-18 07:20:47.197863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.441 [2024-11-18 07:20:47.197882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.441 [2024-11-18 07:20:47.197893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.441 [2024-11-18 07:20:47.197904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.441 [2024-11-18 07:20:47.210313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.441 [2024-11-18 07:20:47.210746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.441 [2024-11-18 07:20:47.210774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.441 [2024-11-18 07:20:47.210790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.441 [2024-11-18 07:20:47.211016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.441 [2024-11-18 07:20:47.211224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.441 [2024-11-18 07:20:47.211242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.441 [2024-11-18 07:20:47.211254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.441 [2024-11-18 07:20:47.211265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.441 [2024-11-18 07:20:47.223431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.441 [2024-11-18 07:20:47.223763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.441 [2024-11-18 07:20:47.223805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.441 [2024-11-18 07:20:47.223825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.441 [2024-11-18 07:20:47.224048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.441 [2024-11-18 07:20:47.224256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.441 [2024-11-18 07:20:47.224274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.441 [2024-11-18 07:20:47.224286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.441 [2024-11-18 07:20:47.224297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.441 [2024-11-18 07:20:47.236522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.441 [2024-11-18 07:20:47.236905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.441 [2024-11-18 07:20:47.236946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.441 [2024-11-18 07:20:47.236961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.441 [2024-11-18 07:20:47.237205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.441 [2024-11-18 07:20:47.237413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.441 [2024-11-18 07:20:47.237432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.441 [2024-11-18 07:20:47.237444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.441 [2024-11-18 07:20:47.237456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.441 [2024-11-18 07:20:47.249604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.441 [2024-11-18 07:20:47.249932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.441 [2024-11-18 07:20:47.249958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.441 [2024-11-18 07:20:47.249974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.441 [2024-11-18 07:20:47.250189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.441 [2024-11-18 07:20:47.250396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.441 [2024-11-18 07:20:47.250414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.441 [2024-11-18 07:20:47.250426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.441 [2024-11-18 07:20:47.250438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.441 [2024-11-18 07:20:47.262759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.441 [2024-11-18 07:20:47.263118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.441 [2024-11-18 07:20:47.263145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.441 [2024-11-18 07:20:47.263160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.441 [2024-11-18 07:20:47.263396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.441 [2024-11-18 07:20:47.263641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.441 [2024-11-18 07:20:47.263662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.441 [2024-11-18 07:20:47.263674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.442 [2024-11-18 07:20:47.263685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.442 [2024-11-18 07:20:47.275757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.442 [2024-11-18 07:20:47.276137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.442 [2024-11-18 07:20:47.276165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.442 [2024-11-18 07:20:47.276180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.442 [2024-11-18 07:20:47.276403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.442 [2024-11-18 07:20:47.276641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.442 [2024-11-18 07:20:47.276661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.442 [2024-11-18 07:20:47.276673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.442 [2024-11-18 07:20:47.276684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.442 [2024-11-18 07:20:47.288985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.442 [2024-11-18 07:20:47.289465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.442 [2024-11-18 07:20:47.289522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.442 [2024-11-18 07:20:47.289541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.442 [2024-11-18 07:20:47.289790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.442 [2024-11-18 07:20:47.289982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.442 [2024-11-18 07:20:47.290000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.442 [2024-11-18 07:20:47.290012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.442 [2024-11-18 07:20:47.290023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.442 [2024-11-18 07:20:47.301970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.442 [2024-11-18 07:20:47.302379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.442 [2024-11-18 07:20:47.302431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.442 [2024-11-18 07:20:47.302446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.442 [2024-11-18 07:20:47.302708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.442 [2024-11-18 07:20:47.302937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.442 [2024-11-18 07:20:47.302955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.442 [2024-11-18 07:20:47.302967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.442 [2024-11-18 07:20:47.302982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.442 [2024-11-18 07:20:47.315047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.442 [2024-11-18 07:20:47.315458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.442 [2024-11-18 07:20:47.315514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.442 [2024-11-18 07:20:47.315530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.442 [2024-11-18 07:20:47.315796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.442 [2024-11-18 07:20:47.315988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.442 [2024-11-18 07:20:47.316006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.442 [2024-11-18 07:20:47.316018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.442 [2024-11-18 07:20:47.316029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.442 [2024-11-18 07:20:47.328167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.442 [2024-11-18 07:20:47.328529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.442 [2024-11-18 07:20:47.328556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.442 [2024-11-18 07:20:47.328572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.442 [2024-11-18 07:20:47.328807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.442 [2024-11-18 07:20:47.329015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.442 [2024-11-18 07:20:47.329033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.442 [2024-11-18 07:20:47.329045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.442 [2024-11-18 07:20:47.329056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.442 [2024-11-18 07:20:47.341308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.442 [2024-11-18 07:20:47.341744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.442 [2024-11-18 07:20:47.341785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.442 [2024-11-18 07:20:47.341801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.442 [2024-11-18 07:20:47.342040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.442 [2024-11-18 07:20:47.342248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.442 [2024-11-18 07:20:47.342266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.442 [2024-11-18 07:20:47.342278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.442 [2024-11-18 07:20:47.342289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.442 [2024-11-18 07:20:47.354558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.442 [2024-11-18 07:20:47.354972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.442 [2024-11-18 07:20:47.355014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.442 [2024-11-18 07:20:47.355029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.442 [2024-11-18 07:20:47.355276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.442 [2024-11-18 07:20:47.355483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.442 [2024-11-18 07:20:47.355526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.442 [2024-11-18 07:20:47.355540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.442 [2024-11-18 07:20:47.355551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.442 [2024-11-18 07:20:47.367671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.442 [2024-11-18 07:20:47.368036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.442 [2024-11-18 07:20:47.368076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.442 [2024-11-18 07:20:47.368091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.442 [2024-11-18 07:20:47.368337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.442 [2024-11-18 07:20:47.368591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.442 [2024-11-18 07:20:47.368612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.442 [2024-11-18 07:20:47.368626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.442 [2024-11-18 07:20:47.368638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.442 [2024-11-18 07:20:47.380796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.442 [2024-11-18 07:20:47.381288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.442 [2024-11-18 07:20:47.381331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.442 [2024-11-18 07:20:47.381348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.442 [2024-11-18 07:20:47.381626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.442 [2024-11-18 07:20:47.381843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.442 [2024-11-18 07:20:47.381862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.442 [2024-11-18 07:20:47.381874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.442 [2024-11-18 07:20:47.381886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.442 [2024-11-18 07:20:47.393872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.442 [2024-11-18 07:20:47.394301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.442 [2024-11-18 07:20:47.394328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.442 [2024-11-18 07:20:47.394349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.442 [2024-11-18 07:20:47.394599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.443 [2024-11-18 07:20:47.394818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.443 [2024-11-18 07:20:47.394851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.443 [2024-11-18 07:20:47.394864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.443 [2024-11-18 07:20:47.394875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.443 [2024-11-18 07:20:47.407033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.443 [2024-11-18 07:20:47.407460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.443 [2024-11-18 07:20:47.407509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.443 [2024-11-18 07:20:47.407526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.443 [2024-11-18 07:20:47.407793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.443 [2024-11-18 07:20:47.408003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.443 [2024-11-18 07:20:47.408021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.443 [2024-11-18 07:20:47.408033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.443 [2024-11-18 07:20:47.408044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.702 [2024-11-18 07:20:47.420675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.702 [2024-11-18 07:20:47.421131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.702 [2024-11-18 07:20:47.421177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.702 [2024-11-18 07:20:47.421193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.702 [2024-11-18 07:20:47.421462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.702 [2024-11-18 07:20:47.421686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.702 [2024-11-18 07:20:47.421705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.702 [2024-11-18 07:20:47.421718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.702 [2024-11-18 07:20:47.421729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.702 [2024-11-18 07:20:47.433807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.702 [2024-11-18 07:20:47.434285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.702 [2024-11-18 07:20:47.434335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.702 [2024-11-18 07:20:47.434350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.702 [2024-11-18 07:20:47.434614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.702 [2024-11-18 07:20:47.434869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.702 [2024-11-18 07:20:47.434888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.702 [2024-11-18 07:20:47.434900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.702 [2024-11-18 07:20:47.434911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.702 [2024-11-18 07:20:47.446886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.702 [2024-11-18 07:20:47.447261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.702 [2024-11-18 07:20:47.447304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.702 [2024-11-18 07:20:47.447320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.702 [2024-11-18 07:20:47.447585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.702 [2024-11-18 07:20:47.447790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.702 [2024-11-18 07:20:47.447823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.702 [2024-11-18 07:20:47.447835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.702 [2024-11-18 07:20:47.447847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.702 [2024-11-18 07:20:47.459943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.702 [2024-11-18 07:20:47.460369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.702 [2024-11-18 07:20:47.460397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.702 [2024-11-18 07:20:47.460413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.702 [2024-11-18 07:20:47.460653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.702 [2024-11-18 07:20:47.460883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.702 [2024-11-18 07:20:47.460901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.702 [2024-11-18 07:20:47.460913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.702 [2024-11-18 07:20:47.460924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.702 [2024-11-18 07:20:47.473086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.702 [2024-11-18 07:20:47.473449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.702 [2024-11-18 07:20:47.473476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.702 [2024-11-18 07:20:47.473515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.702 [2024-11-18 07:20:47.473748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.702 [2024-11-18 07:20:47.473975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.702 [2024-11-18 07:20:47.473993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.702 [2024-11-18 07:20:47.474005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.702 [2024-11-18 07:20:47.474020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.702 [2024-11-18 07:20:47.486175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.702 [2024-11-18 07:20:47.486662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.702 [2024-11-18 07:20:47.486704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.702 [2024-11-18 07:20:47.486721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.702 [2024-11-18 07:20:47.486970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.702 [2024-11-18 07:20:47.487162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.702 [2024-11-18 07:20:47.487180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.702 [2024-11-18 07:20:47.487192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.702 [2024-11-18 07:20:47.487203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.702 [2024-11-18 07:20:47.499273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.702 [2024-11-18 07:20:47.499623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.702 [2024-11-18 07:20:47.499650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.702 [2024-11-18 07:20:47.499666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.702 [2024-11-18 07:20:47.499907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.703 [2024-11-18 07:20:47.500099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.703 [2024-11-18 07:20:47.500117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.703 [2024-11-18 07:20:47.500129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.703 [2024-11-18 07:20:47.500140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.703 [2024-11-18 07:20:47.512394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.703 [2024-11-18 07:20:47.512792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.703 [2024-11-18 07:20:47.512835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.703 [2024-11-18 07:20:47.512851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.703 [2024-11-18 07:20:47.513084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.703 [2024-11-18 07:20:47.513291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.703 [2024-11-18 07:20:47.513309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.703 [2024-11-18 07:20:47.513321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.703 [2024-11-18 07:20:47.513331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.703 [2024-11-18 07:20:47.525619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.703 [2024-11-18 07:20:47.525994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.703 [2024-11-18 07:20:47.526031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.703 [2024-11-18 07:20:47.526065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.703 [2024-11-18 07:20:47.526320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.703 [2024-11-18 07:20:47.526557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.703 [2024-11-18 07:20:47.526577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.703 [2024-11-18 07:20:47.526589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.703 [2024-11-18 07:20:47.526600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.703 [2024-11-18 07:20:47.539046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.703 [2024-11-18 07:20:47.539452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.703 [2024-11-18 07:20:47.539481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.703 [2024-11-18 07:20:47.539507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.703 [2024-11-18 07:20:47.539736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.703 [2024-11-18 07:20:47.539977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.703 [2024-11-18 07:20:47.539995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.703 [2024-11-18 07:20:47.540007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.703 [2024-11-18 07:20:47.540018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.703 [2024-11-18 07:20:47.552390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.703 [2024-11-18 07:20:47.552831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.703 [2024-11-18 07:20:47.552858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.703 [2024-11-18 07:20:47.552874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.703 [2024-11-18 07:20:47.553102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.703 [2024-11-18 07:20:47.553316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.703 [2024-11-18 07:20:47.553335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.703 [2024-11-18 07:20:47.553348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.703 [2024-11-18 07:20:47.553360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.703 [2024-11-18 07:20:47.565588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.703 [2024-11-18 07:20:47.565930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.703 [2024-11-18 07:20:47.565957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.703 [2024-11-18 07:20:47.565977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.703 [2024-11-18 07:20:47.566179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.703 [2024-11-18 07:20:47.566389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.703 [2024-11-18 07:20:47.566407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.703 [2024-11-18 07:20:47.566419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.703 [2024-11-18 07:20:47.566430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.703 [2024-11-18 07:20:47.578845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.703 [2024-11-18 07:20:47.579324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.703 [2024-11-18 07:20:47.579375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.703 [2024-11-18 07:20:47.579391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.703 [2024-11-18 07:20:47.579664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.703 [2024-11-18 07:20:47.579862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.703 [2024-11-18 07:20:47.579880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.703 [2024-11-18 07:20:47.579892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.703 [2024-11-18 07:20:47.579919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.703 [2024-11-18 07:20:47.592043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.703 [2024-11-18 07:20:47.592374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.703 [2024-11-18 07:20:47.592401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.703 [2024-11-18 07:20:47.592416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.703 [2024-11-18 07:20:47.592645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.703 [2024-11-18 07:20:47.592892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.703 [2024-11-18 07:20:47.592910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.703 [2024-11-18 07:20:47.592922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.703 [2024-11-18 07:20:47.592933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.703 [2024-11-18 07:20:47.605226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.703 [2024-11-18 07:20:47.605591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.703 [2024-11-18 07:20:47.605635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.703 [2024-11-18 07:20:47.605651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.703 [2024-11-18 07:20:47.605902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.703 [2024-11-18 07:20:47.606117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.703 [2024-11-18 07:20:47.606136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.703 [2024-11-18 07:20:47.606148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.703 [2024-11-18 07:20:47.606159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.703 [2024-11-18 07:20:47.618435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.703 [2024-11-18 07:20:47.618812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.703 [2024-11-18 07:20:47.618874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.703 [2024-11-18 07:20:47.618890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.703 [2024-11-18 07:20:47.619124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.703 [2024-11-18 07:20:47.619331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.703 [2024-11-18 07:20:47.619350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.703 [2024-11-18 07:20:47.619362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.703 [2024-11-18 07:20:47.619373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.703 [2024-11-18 07:20:47.631688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.703 [2024-11-18 07:20:47.632065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.703 [2024-11-18 07:20:47.632091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.703 [2024-11-18 07:20:47.632106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.703 [2024-11-18 07:20:47.632319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.704 [2024-11-18 07:20:47.632570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.704 [2024-11-18 07:20:47.632591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.704 [2024-11-18 07:20:47.632604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.704 [2024-11-18 07:20:47.632616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.704 [2024-11-18 07:20:47.644926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.704 [2024-11-18 07:20:47.645349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.704 [2024-11-18 07:20:47.645390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.704 [2024-11-18 07:20:47.645406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.704 [2024-11-18 07:20:47.645658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.704 [2024-11-18 07:20:47.645889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.704 [2024-11-18 07:20:47.645908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.704 [2024-11-18 07:20:47.645919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.704 [2024-11-18 07:20:47.645935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.704 [2024-11-18 07:20:47.658053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.704 [2024-11-18 07:20:47.658476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.704 [2024-11-18 07:20:47.658524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.704 [2024-11-18 07:20:47.658542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.704 [2024-11-18 07:20:47.658784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.704 [2024-11-18 07:20:47.659010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.704 [2024-11-18 07:20:47.659029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.704 [2024-11-18 07:20:47.659042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.704 [2024-11-18 07:20:47.659053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.704 [2024-11-18 07:20:47.671207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.704 [2024-11-18 07:20:47.671525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.704 [2024-11-18 07:20:47.671552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.704 [2024-11-18 07:20:47.671568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.704 [2024-11-18 07:20:47.671784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.704 [2024-11-18 07:20:47.671992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.704 [2024-11-18 07:20:47.672010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.704 [2024-11-18 07:20:47.672022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.704 [2024-11-18 07:20:47.672033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.963 [2024-11-18 07:20:47.684598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.963 [2024-11-18 07:20:47.684952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-11-18 07:20:47.684980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.963 [2024-11-18 07:20:47.684996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.963 [2024-11-18 07:20:47.685209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.963 [2024-11-18 07:20:47.685460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.963 [2024-11-18 07:20:47.685503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.963 [2024-11-18 07:20:47.685518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.963 [2024-11-18 07:20:47.685530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.963 [2024-11-18 07:20:47.697677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.963 [2024-11-18 07:20:47.698186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-11-18 07:20:47.698228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.963 [2024-11-18 07:20:47.698244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.963 [2024-11-18 07:20:47.698505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.963 [2024-11-18 07:20:47.698725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.963 [2024-11-18 07:20:47.698744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.963 [2024-11-18 07:20:47.698756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.963 [2024-11-18 07:20:47.698769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.963 [2024-11-18 07:20:47.710739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.963 [2024-11-18 07:20:47.711057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-11-18 07:20:47.711082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.963 [2024-11-18 07:20:47.711096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.963 [2024-11-18 07:20:47.711291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.964 [2024-11-18 07:20:47.711507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.964 [2024-11-18 07:20:47.711541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.964 [2024-11-18 07:20:47.711553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.964 [2024-11-18 07:20:47.711565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.964 [2024-11-18 07:20:47.723809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.964 [2024-11-18 07:20:47.724229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-11-18 07:20:47.724256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.964 [2024-11-18 07:20:47.724271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.964 [2024-11-18 07:20:47.724514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.964 [2024-11-18 07:20:47.724728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.964 [2024-11-18 07:20:47.724747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.964 [2024-11-18 07:20:47.724759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.964 [2024-11-18 07:20:47.724770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.964 [2024-11-18 07:20:47.736926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.964 [2024-11-18 07:20:47.737413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-11-18 07:20:47.737454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.964 [2024-11-18 07:20:47.737475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.964 [2024-11-18 07:20:47.737713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.964 [2024-11-18 07:20:47.737940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.964 [2024-11-18 07:20:47.737958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.964 [2024-11-18 07:20:47.737970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.964 [2024-11-18 07:20:47.737981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.964 [2024-11-18 07:20:47.750020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.964 [2024-11-18 07:20:47.750526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-11-18 07:20:47.750552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.964 [2024-11-18 07:20:47.750568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.964 [2024-11-18 07:20:47.750829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.964 [2024-11-18 07:20:47.751021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.964 [2024-11-18 07:20:47.751038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.964 [2024-11-18 07:20:47.751051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.964 [2024-11-18 07:20:47.751062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.964 [2024-11-18 07:20:47.763068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.964 [2024-11-18 07:20:47.763569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-11-18 07:20:47.763610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.964 [2024-11-18 07:20:47.763627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.964 [2024-11-18 07:20:47.763876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.964 [2024-11-18 07:20:47.764084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.964 [2024-11-18 07:20:47.764101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.964 [2024-11-18 07:20:47.764113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.964 [2024-11-18 07:20:47.764124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.964 [2024-11-18 07:20:47.776197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.964 [2024-11-18 07:20:47.776636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-11-18 07:20:47.776664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.964 [2024-11-18 07:20:47.776679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.964 [2024-11-18 07:20:47.776921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.964 [2024-11-18 07:20:47.777152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.964 [2024-11-18 07:20:47.777171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.964 [2024-11-18 07:20:47.777183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.964 [2024-11-18 07:20:47.777194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.964 [2024-11-18 07:20:47.789214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.964 [2024-11-18 07:20:47.789578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-11-18 07:20:47.789622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.964 [2024-11-18 07:20:47.789637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.964 [2024-11-18 07:20:47.789888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.964 [2024-11-18 07:20:47.790096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.964 [2024-11-18 07:20:47.790114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.964 [2024-11-18 07:20:47.790126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.964 [2024-11-18 07:20:47.790137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.964 [2024-11-18 07:20:47.802216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.964 [2024-11-18 07:20:47.802612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-11-18 07:20:47.802639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.964 [2024-11-18 07:20:47.802654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.964 [2024-11-18 07:20:47.802875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.964 [2024-11-18 07:20:47.803083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.964 [2024-11-18 07:20:47.803101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.964 [2024-11-18 07:20:47.803113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.964 [2024-11-18 07:20:47.803124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.964 [2024-11-18 07:20:47.815295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.964 [2024-11-18 07:20:47.815665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-11-18 07:20:47.815708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.964 [2024-11-18 07:20:47.815723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.964 [2024-11-18 07:20:47.815976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.964 [2024-11-18 07:20:47.816183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.964 [2024-11-18 07:20:47.816201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.964 [2024-11-18 07:20:47.816213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.964 [2024-11-18 07:20:47.816228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.964 [2024-11-18 07:20:47.828362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.964 [2024-11-18 07:20:47.828794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-11-18 07:20:47.828821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.964 [2024-11-18 07:20:47.828852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.964 [2024-11-18 07:20:47.829091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.964 [2024-11-18 07:20:47.829297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.964 [2024-11-18 07:20:47.829315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.964 [2024-11-18 07:20:47.829327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.964 [2024-11-18 07:20:47.829338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.964 [2024-11-18 07:20:47.841389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.964 [2024-11-18 07:20:47.841821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-11-18 07:20:47.841884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.965 [2024-11-18 07:20:47.841899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.965 [2024-11-18 07:20:47.842146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.965 [2024-11-18 07:20:47.842354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.965 [2024-11-18 07:20:47.842372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.965 [2024-11-18 07:20:47.842384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.965 [2024-11-18 07:20:47.842395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.965 [2024-11-18 07:20:47.854387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.965 [2024-11-18 07:20:47.854914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-11-18 07:20:47.854968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.965 [2024-11-18 07:20:47.854983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.965 [2024-11-18 07:20:47.855232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.965 [2024-11-18 07:20:47.855423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.965 [2024-11-18 07:20:47.855442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.965 [2024-11-18 07:20:47.855454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.965 [2024-11-18 07:20:47.855464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.965 [2024-11-18 07:20:47.867652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.965 [2024-11-18 07:20:47.868121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-11-18 07:20:47.868163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.965 [2024-11-18 07:20:47.868179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.965 [2024-11-18 07:20:47.868415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.965 [2024-11-18 07:20:47.868655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.965 [2024-11-18 07:20:47.868675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.965 [2024-11-18 07:20:47.868689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.965 [2024-11-18 07:20:47.868701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.965 4393.40 IOPS, 17.16 MiB/s [2024-11-18T06:20:47.943Z] [2024-11-18 07:20:47.882292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.965 [2024-11-18 07:20:47.882684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-11-18 07:20:47.882712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.965 [2024-11-18 07:20:47.882729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.965 [2024-11-18 07:20:47.882970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.965 [2024-11-18 07:20:47.883184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.965 [2024-11-18 07:20:47.883203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.965 [2024-11-18 07:20:47.883215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.965 [2024-11-18 07:20:47.883227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.965 [2024-11-18 07:20:47.895374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.965 [2024-11-18 07:20:47.895877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-11-18 07:20:47.895927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.965 [2024-11-18 07:20:47.895943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.965 [2024-11-18 07:20:47.896207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.965 [2024-11-18 07:20:47.896399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.965 [2024-11-18 07:20:47.896417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.965 [2024-11-18 07:20:47.896429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.965 [2024-11-18 07:20:47.896440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.965 [2024-11-18 07:20:47.908471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.965 [2024-11-18 07:20:47.908841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-11-18 07:20:47.908870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.965 [2024-11-18 07:20:47.908894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.965 [2024-11-18 07:20:47.909134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.965 [2024-11-18 07:20:47.909342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.965 [2024-11-18 07:20:47.909361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.965 [2024-11-18 07:20:47.909373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.965 [2024-11-18 07:20:47.909384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.965 [2024-11-18 07:20:47.921932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.965 [2024-11-18 07:20:47.922297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-11-18 07:20:47.922325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.965 [2024-11-18 07:20:47.922340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.965 [2024-11-18 07:20:47.922590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.965 [2024-11-18 07:20:47.922803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.965 [2024-11-18 07:20:47.922821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.965 [2024-11-18 07:20:47.922833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.965 [2024-11-18 07:20:47.922844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.965 [2024-11-18 07:20:47.935119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.965 [2024-11-18 07:20:47.935545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-11-18 07:20:47.935573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:26.965 [2024-11-18 07:20:47.935588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:26.965 [2024-11-18 07:20:47.935858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:26.965 [2024-11-18 07:20:47.936049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.965 [2024-11-18 07:20:47.936067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.965 [2024-11-18 07:20:47.936080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.965 [2024-11-18 07:20:47.936091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.225 [2024-11-18 07:20:47.948604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.225 [2024-11-18 07:20:47.949019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.225 [2024-11-18 07:20:47.949059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.225 [2024-11-18 07:20:47.949076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.225 [2024-11-18 07:20:47.949310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.225 [2024-11-18 07:20:47.949535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.225 [2024-11-18 07:20:47.949556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.225 [2024-11-18 07:20:47.949569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.225 [2024-11-18 07:20:47.949581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.225 [2024-11-18 07:20:47.961881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.225 [2024-11-18 07:20:47.962179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.225 [2024-11-18 07:20:47.962247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.225 [2024-11-18 07:20:47.962283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.225 [2024-11-18 07:20:47.962523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.225 [2024-11-18 07:20:47.962738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.225 [2024-11-18 07:20:47.962756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.225 [2024-11-18 07:20:47.962769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.225 [2024-11-18 07:20:47.962781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.225 [2024-11-18 07:20:47.975007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.225 [2024-11-18 07:20:47.975382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.225 [2024-11-18 07:20:47.975425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.225 [2024-11-18 07:20:47.975441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.225 [2024-11-18 07:20:47.975707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.225 [2024-11-18 07:20:47.975919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.225 [2024-11-18 07:20:47.975938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.225 [2024-11-18 07:20:47.975950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.225 [2024-11-18 07:20:47.975961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.225 [2024-11-18 07:20:47.988088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.225 [2024-11-18 07:20:47.988506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.225 [2024-11-18 07:20:47.988571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.225 [2024-11-18 07:20:47.988586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.225 [2024-11-18 07:20:47.988834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.225 [2024-11-18 07:20:47.989042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.225 [2024-11-18 07:20:47.989060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.225 [2024-11-18 07:20:47.989077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.225 [2024-11-18 07:20:47.989088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.225 [2024-11-18 07:20:48.001364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.225 [2024-11-18 07:20:48.001824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.225 [2024-11-18 07:20:48.001867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.225 [2024-11-18 07:20:48.001883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.225 [2024-11-18 07:20:48.002134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.225 [2024-11-18 07:20:48.002325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.225 [2024-11-18 07:20:48.002343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.225 [2024-11-18 07:20:48.002355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.225 [2024-11-18 07:20:48.002366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.225 [2024-11-18 07:20:48.014583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.225 [2024-11-18 07:20:48.015025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.225 [2024-11-18 07:20:48.015076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.225 [2024-11-18 07:20:48.015091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.225 [2024-11-18 07:20:48.015353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.225 [2024-11-18 07:20:48.015589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.225 [2024-11-18 07:20:48.015610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.225 [2024-11-18 07:20:48.015622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.225 [2024-11-18 07:20:48.015634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.225 [2024-11-18 07:20:48.028125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.225 [2024-11-18 07:20:48.028470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.225 [2024-11-18 07:20:48.028508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.225 [2024-11-18 07:20:48.028527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.225 [2024-11-18 07:20:48.028748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.225 [2024-11-18 07:20:48.028970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.225 [2024-11-18 07:20:48.028990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.225 [2024-11-18 07:20:48.029003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.225 [2024-11-18 07:20:48.029015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.225 [2024-11-18 07:20:48.041414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.225 [2024-11-18 07:20:48.041835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.225 [2024-11-18 07:20:48.041869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.225 [2024-11-18 07:20:48.041901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.225 [2024-11-18 07:20:48.042140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.225 [2024-11-18 07:20:48.042338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.225 [2024-11-18 07:20:48.042357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.225 [2024-11-18 07:20:48.042370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.225 [2024-11-18 07:20:48.042381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.225 [2024-11-18 07:20:48.054654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.225 [2024-11-18 07:20:48.055094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.225 [2024-11-18 07:20:48.055147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.225 [2024-11-18 07:20:48.055162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.225 [2024-11-18 07:20:48.055423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.225 [2024-11-18 07:20:48.055649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.225 [2024-11-18 07:20:48.055670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.225 [2024-11-18 07:20:48.055683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.225 [2024-11-18 07:20:48.055694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.225 [2024-11-18 07:20:48.067917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.226 [2024-11-18 07:20:48.068312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.226 [2024-11-18 07:20:48.068339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.226 [2024-11-18 07:20:48.068355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.226 [2024-11-18 07:20:48.068592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.226 [2024-11-18 07:20:48.068824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.226 [2024-11-18 07:20:48.068857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.226 [2024-11-18 07:20:48.068869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.226 [2024-11-18 07:20:48.068881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.226 [2024-11-18 07:20:48.081163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.226 [2024-11-18 07:20:48.081541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.226 [2024-11-18 07:20:48.081570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.226 [2024-11-18 07:20:48.081591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.226 [2024-11-18 07:20:48.081820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.226 [2024-11-18 07:20:48.082028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.226 [2024-11-18 07:20:48.082046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.226 [2024-11-18 07:20:48.082058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.226 [2024-11-18 07:20:48.082069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.226 [2024-11-18 07:20:48.094163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.226 [2024-11-18 07:20:48.094526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.226 [2024-11-18 07:20:48.094569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.226 [2024-11-18 07:20:48.094585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.226 [2024-11-18 07:20:48.094833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.226 [2024-11-18 07:20:48.095025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.226 [2024-11-18 07:20:48.095043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.226 [2024-11-18 07:20:48.095055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.226 [2024-11-18 07:20:48.095066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.226 [2024-11-18 07:20:48.107291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.226 [2024-11-18 07:20:48.107682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.226 [2024-11-18 07:20:48.107710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.226 [2024-11-18 07:20:48.107726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.226 [2024-11-18 07:20:48.107967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.226 [2024-11-18 07:20:48.108175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.226 [2024-11-18 07:20:48.108193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.226 [2024-11-18 07:20:48.108205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.226 [2024-11-18 07:20:48.108216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.226 [2024-11-18 07:20:48.120250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.226 [2024-11-18 07:20:48.120614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.226 [2024-11-18 07:20:48.120656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.226 [2024-11-18 07:20:48.120671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.226 [2024-11-18 07:20:48.120918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.226 [2024-11-18 07:20:48.121115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.226 [2024-11-18 07:20:48.121133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.226 [2024-11-18 07:20:48.121145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.226 [2024-11-18 07:20:48.121156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.226 [2024-11-18 07:20:48.133219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.226 [2024-11-18 07:20:48.133520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.226 [2024-11-18 07:20:48.133561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.226 [2024-11-18 07:20:48.133576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.226 [2024-11-18 07:20:48.133792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.226 [2024-11-18 07:20:48.134001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.226 [2024-11-18 07:20:48.134019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.226 [2024-11-18 07:20:48.134031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.226 [2024-11-18 07:20:48.134042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.226 [2024-11-18 07:20:48.146350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.226 [2024-11-18 07:20:48.146846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.226 [2024-11-18 07:20:48.146889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.226 [2024-11-18 07:20:48.146905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.226 [2024-11-18 07:20:48.147155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.226 [2024-11-18 07:20:48.147363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.226 [2024-11-18 07:20:48.147381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.226 [2024-11-18 07:20:48.147393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.226 [2024-11-18 07:20:48.147403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.226 [2024-11-18 07:20:48.159326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.226 [2024-11-18 07:20:48.159777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.226 [2024-11-18 07:20:48.159819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.226 [2024-11-18 07:20:48.159835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.226 [2024-11-18 07:20:48.160086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.226 [2024-11-18 07:20:48.160328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.226 [2024-11-18 07:20:48.160348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.226 [2024-11-18 07:20:48.160366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.226 [2024-11-18 07:20:48.160378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.226 [2024-11-18 07:20:48.172409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.226 [2024-11-18 07:20:48.172774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.226 [2024-11-18 07:20:48.172802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.226 [2024-11-18 07:20:48.172818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.226 [2024-11-18 07:20:48.173056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.226 [2024-11-18 07:20:48.173262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.226 [2024-11-18 07:20:48.173280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.226 [2024-11-18 07:20:48.173293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.226 [2024-11-18 07:20:48.173303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.226 [2024-11-18 07:20:48.185767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.226 [2024-11-18 07:20:48.186156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.226 [2024-11-18 07:20:48.186184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.226 [2024-11-18 07:20:48.186200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.226 [2024-11-18 07:20:48.186443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.226 [2024-11-18 07:20:48.186685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.226 [2024-11-18 07:20:48.186706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.226 [2024-11-18 07:20:48.186718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.226 [2024-11-18 07:20:48.186730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.227 [2024-11-18 07:20:48.199199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.227 [2024-11-18 07:20:48.199569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.227 [2024-11-18 07:20:48.199597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.227 [2024-11-18 07:20:48.199613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.227 [2024-11-18 07:20:48.199842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.227 [2024-11-18 07:20:48.200067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.227 [2024-11-18 07:20:48.200087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.227 [2024-11-18 07:20:48.200115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.227 [2024-11-18 07:20:48.200128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.486 [2024-11-18 07:20:48.212541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.486 [2024-11-18 07:20:48.212949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.486 [2024-11-18 07:20:48.212991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.486 [2024-11-18 07:20:48.213006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.486 [2024-11-18 07:20:48.213261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.486 [2024-11-18 07:20:48.213482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.486 [2024-11-18 07:20:48.213512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.486 [2024-11-18 07:20:48.213525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.486 [2024-11-18 07:20:48.213538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.486 [2024-11-18 07:20:48.225870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.486 [2024-11-18 07:20:48.226188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.486 [2024-11-18 07:20:48.226214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.487 [2024-11-18 07:20:48.226229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.487 [2024-11-18 07:20:48.226444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.487 [2024-11-18 07:20:48.226692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.487 [2024-11-18 07:20:48.226713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.487 [2024-11-18 07:20:48.226726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.487 [2024-11-18 07:20:48.226738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.487 [2024-11-18 07:20:48.238848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.487 [2024-11-18 07:20:48.239227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.487 [2024-11-18 07:20:48.239268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.487 [2024-11-18 07:20:48.239284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.487 [2024-11-18 07:20:48.239516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.487 [2024-11-18 07:20:48.239730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.487 [2024-11-18 07:20:48.239749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.487 [2024-11-18 07:20:48.239762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.487 [2024-11-18 07:20:48.239773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.487 [2024-11-18 07:20:48.251844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.487 [2024-11-18 07:20:48.252239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.487 [2024-11-18 07:20:48.252266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.487 [2024-11-18 07:20:48.252287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.487 [2024-11-18 07:20:48.252537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.487 [2024-11-18 07:20:48.252737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.487 [2024-11-18 07:20:48.252755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.487 [2024-11-18 07:20:48.252768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.487 [2024-11-18 07:20:48.252793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.487 [2024-11-18 07:20:48.264879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.487 [2024-11-18 07:20:48.265368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.487 [2024-11-18 07:20:48.265410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.487 [2024-11-18 07:20:48.265426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.487 [2024-11-18 07:20:48.265706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.487 [2024-11-18 07:20:48.265937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.487 [2024-11-18 07:20:48.265956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.487 [2024-11-18 07:20:48.265967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.487 [2024-11-18 07:20:48.265978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.487 [2024-11-18 07:20:48.277902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.487 [2024-11-18 07:20:48.278392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.487 [2024-11-18 07:20:48.278419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.487 [2024-11-18 07:20:48.278450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.487 [2024-11-18 07:20:48.278688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.487 [2024-11-18 07:20:48.278906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.487 [2024-11-18 07:20:48.278924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.487 [2024-11-18 07:20:48.278936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.487 [2024-11-18 07:20:48.278947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.487 [2024-11-18 07:20:48.290972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.487 [2024-11-18 07:20:48.291367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.487 [2024-11-18 07:20:48.291394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.487 [2024-11-18 07:20:48.291409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.487 [2024-11-18 07:20:48.291675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.487 [2024-11-18 07:20:48.291911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.487 [2024-11-18 07:20:48.291930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.487 [2024-11-18 07:20:48.291943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.487 [2024-11-18 07:20:48.291953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.487 [2024-11-18 07:20:48.304022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.487 [2024-11-18 07:20:48.304383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.487 [2024-11-18 07:20:48.304409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.487 [2024-11-18 07:20:48.304425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.487 [2024-11-18 07:20:48.304689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.487 [2024-11-18 07:20:48.304917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.487 [2024-11-18 07:20:48.304936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.487 [2024-11-18 07:20:48.304947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.487 [2024-11-18 07:20:48.304959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.487 [2024-11-18 07:20:48.317063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.487 [2024-11-18 07:20:48.317455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.487 [2024-11-18 07:20:48.317482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.487 [2024-11-18 07:20:48.317522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.487 [2024-11-18 07:20:48.317753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.487 [2024-11-18 07:20:48.317983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.487 [2024-11-18 07:20:48.318001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.487 [2024-11-18 07:20:48.318013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.487 [2024-11-18 07:20:48.318024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.487 [2024-11-18 07:20:48.330134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.487 [2024-11-18 07:20:48.330526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.487 [2024-11-18 07:20:48.330553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.487 [2024-11-18 07:20:48.330569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.487 [2024-11-18 07:20:48.330790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.487 [2024-11-18 07:20:48.331015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.487 [2024-11-18 07:20:48.331033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.487 [2024-11-18 07:20:48.331049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.487 [2024-11-18 07:20:48.331061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.487 [2024-11-18 07:20:48.343153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.487 [2024-11-18 07:20:48.343516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.487 [2024-11-18 07:20:48.343561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.487 [2024-11-18 07:20:48.343577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.487 [2024-11-18 07:20:48.343829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.487 [2024-11-18 07:20:48.344037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.487 [2024-11-18 07:20:48.344055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.487 [2024-11-18 07:20:48.344067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.487 [2024-11-18 07:20:48.344078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.487 [2024-11-18 07:20:48.356282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.487 [2024-11-18 07:20:48.356698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.488 [2024-11-18 07:20:48.356739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.488 [2024-11-18 07:20:48.356755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.488 [2024-11-18 07:20:48.356991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.488 [2024-11-18 07:20:48.357182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.488 [2024-11-18 07:20:48.357200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.488 [2024-11-18 07:20:48.357212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.488 [2024-11-18 07:20:48.357223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.488 [2024-11-18 07:20:48.369306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.488 [2024-11-18 07:20:48.369749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.488 [2024-11-18 07:20:48.369777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.488 [2024-11-18 07:20:48.369793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.488 [2024-11-18 07:20:48.370033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.488 [2024-11-18 07:20:48.370242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.488 [2024-11-18 07:20:48.370260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.488 [2024-11-18 07:20:48.370272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.488 [2024-11-18 07:20:48.370283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 399714 Killed "${NVMF_APP[@]}" "$@" 00:35:27.488 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:27.488 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:27.488 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:27.488 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:27.488 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:27.488 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=400678 00:35:27.488 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:27.488 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 400678 00:35:27.488 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 400678 ']' 00:35:27.488 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:27.488 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:27.488 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:27.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:27.488 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:27.488 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:27.488 [2024-11-18 07:20:48.382880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.488 [2024-11-18 07:20:48.383250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.488 [2024-11-18 07:20:48.383293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.488 [2024-11-18 07:20:48.383309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.488 [2024-11-18 07:20:48.383580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.488 [2024-11-18 07:20:48.383799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.488 [2024-11-18 07:20:48.383834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.488 [2024-11-18 07:20:48.383847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.488 [2024-11-18 07:20:48.383859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
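At this point tgt_init/nvmfappstart restarts the target: a fresh nvmf_tgt (pid 400678) is launched inside the cvl_0_0_ns_spdk namespace with core mask 0xE, and the script waits for its RPC socket before configuring it. A condensed sketch of that pattern follows, with the paths, namespace and flags taken from the trace and the readiness loop simplified compared to the real waitforlisten helper.

    # Simplified sketch of the nvmfappstart step traced above (the real
    # waitforlisten helper polls the RPC socket more carefully and retries).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # default SPDK RPC socket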
00:35:27.488 [2024-11-18 07:20:48.396307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.488 [2024-11-18 07:20:48.396679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.488 [2024-11-18 07:20:48.396723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.488 [2024-11-18 07:20:48.396740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.488 [2024-11-18 07:20:48.396978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.488 [2024-11-18 07:20:48.397192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.488 [2024-11-18 07:20:48.397210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.488 [2024-11-18 07:20:48.397222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.488 [2024-11-18 07:20:48.397234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.488 [2024-11-18 07:20:48.409626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.488 [2024-11-18 07:20:48.410011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.488 [2024-11-18 07:20:48.410039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.488 [2024-11-18 07:20:48.410070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.488 [2024-11-18 07:20:48.410324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.488 [2024-11-18 07:20:48.410567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.488 [2024-11-18 07:20:48.410588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.488 [2024-11-18 07:20:48.410602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.488 [2024-11-18 07:20:48.410614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.488 [2024-11-18 07:20:48.422869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.488 [2024-11-18 07:20:48.423219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.488 [2024-11-18 07:20:48.423246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.488 [2024-11-18 07:20:48.423262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.488 [2024-11-18 07:20:48.423484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.488 [2024-11-18 07:20:48.423698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.488 [2024-11-18 07:20:48.423717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.488 [2024-11-18 07:20:48.423731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.488 [2024-11-18 07:20:48.423743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.488 [2024-11-18 07:20:48.425696] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:35:27.488 [2024-11-18 07:20:48.425769] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:27.488 [2024-11-18 07:20:48.436244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.488 [2024-11-18 07:20:48.436681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.488 [2024-11-18 07:20:48.436709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.488 [2024-11-18 07:20:48.436725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.488 [2024-11-18 07:20:48.436977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.488 [2024-11-18 07:20:48.437175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.488 [2024-11-18 07:20:48.437193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.488 [2024-11-18 07:20:48.437205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.488 [2024-11-18 07:20:48.437222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.488 [2024-11-18 07:20:48.449437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.488 [2024-11-18 07:20:48.449825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.488 [2024-11-18 07:20:48.449853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.488 [2024-11-18 07:20:48.449869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.488 [2024-11-18 07:20:48.450097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.488 [2024-11-18 07:20:48.450311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.488 [2024-11-18 07:20:48.450329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.488 [2024-11-18 07:20:48.450341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.488 [2024-11-18 07:20:48.450353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.488 [2024-11-18 07:20:48.463100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.488 [2024-11-18 07:20:48.463500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.488 [2024-11-18 07:20:48.463528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.489 [2024-11-18 07:20:48.463544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.489 [2024-11-18 07:20:48.463758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.748 [2024-11-18 07:20:48.463976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.748 [2024-11-18 07:20:48.463997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.748 [2024-11-18 07:20:48.464011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.748 [2024-11-18 07:20:48.464023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.748 [2024-11-18 07:20:48.476463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.748 [2024-11-18 07:20:48.476863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.748 [2024-11-18 07:20:48.476890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.748 [2024-11-18 07:20:48.476906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.748 [2024-11-18 07:20:48.477148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.749 [2024-11-18 07:20:48.477362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.749 [2024-11-18 07:20:48.477381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.749 [2024-11-18 07:20:48.477393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.749 [2024-11-18 07:20:48.477405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.749 [2024-11-18 07:20:48.489842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.749 [2024-11-18 07:20:48.490278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.749 [2024-11-18 07:20:48.490310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.749 [2024-11-18 07:20:48.490327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.749 [2024-11-18 07:20:48.490581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.749 [2024-11-18 07:20:48.490813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.749 [2024-11-18 07:20:48.490833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.749 [2024-11-18 07:20:48.490846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.749 [2024-11-18 07:20:48.490872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.749 [2024-11-18 07:20:48.499606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:27.749 [2024-11-18 07:20:48.503145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.749 [2024-11-18 07:20:48.503539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.749 [2024-11-18 07:20:48.503582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.749 [2024-11-18 07:20:48.503599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.749 [2024-11-18 07:20:48.503828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.749 [2024-11-18 07:20:48.504042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.749 [2024-11-18 07:20:48.504061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.749 [2024-11-18 07:20:48.504073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.749 [2024-11-18 07:20:48.504085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.749 [2024-11-18 07:20:48.516358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.749 [2024-11-18 07:20:48.516983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.749 [2024-11-18 07:20:48.517019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.749 [2024-11-18 07:20:48.517039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.749 [2024-11-18 07:20:48.517305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.749 [2024-11-18 07:20:48.517532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.749 [2024-11-18 07:20:48.517553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.749 [2024-11-18 07:20:48.517568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.749 [2024-11-18 07:20:48.517582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.749 [2024-11-18 07:20:48.529833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.749 [2024-11-18 07:20:48.530273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.749 [2024-11-18 07:20:48.530302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.749 [2024-11-18 07:20:48.530326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.749 [2024-11-18 07:20:48.530567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.749 [2024-11-18 07:20:48.530779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.749 [2024-11-18 07:20:48.530799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.749 [2024-11-18 07:20:48.530828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.749 [2024-11-18 07:20:48.530841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.749 [2024-11-18 07:20:48.543135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.749 [2024-11-18 07:20:48.543711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.749 [2024-11-18 07:20:48.543740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.749 [2024-11-18 07:20:48.543757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.749 [2024-11-18 07:20:48.543975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.749 [2024-11-18 07:20:48.544194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.749 [2024-11-18 07:20:48.544215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.749 [2024-11-18 07:20:48.544230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.749 [2024-11-18 07:20:48.544243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.749 [2024-11-18 07:20:48.545216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:27.749 [2024-11-18 07:20:48.545260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:27.749 [2024-11-18 07:20:48.545274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:27.749 [2024-11-18 07:20:48.545284] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:27.749 [2024-11-18 07:20:48.545293] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:27.749 [2024-11-18 07:20:48.546650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:27.749 [2024-11-18 07:20:48.546711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:27.749 [2024-11-18 07:20:48.546715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:27.749 [2024-11-18 07:20:48.556663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.749 [2024-11-18 07:20:48.557172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.749 [2024-11-18 07:20:48.557209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.749 [2024-11-18 07:20:48.557228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.749 [2024-11-18 07:20:48.557467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.749 [2024-11-18 07:20:48.557714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.749 [2024-11-18 07:20:48.557736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.749 [2024-11-18 07:20:48.557753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.749 [2024-11-18 07:20:48.557781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.749 [2024-11-18 07:20:48.570215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.749 [2024-11-18 07:20:48.570782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.749 [2024-11-18 07:20:48.570820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.749 [2024-11-18 07:20:48.570839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.749 [2024-11-18 07:20:48.571078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.749 [2024-11-18 07:20:48.571293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.749 [2024-11-18 07:20:48.571313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.749 [2024-11-18 07:20:48.571329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.749 [2024-11-18 07:20:48.571343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.749 [2024-11-18 07:20:48.583815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.749 [2024-11-18 07:20:48.584316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.749 [2024-11-18 07:20:48.584354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.749 [2024-11-18 07:20:48.584373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.749 [2024-11-18 07:20:48.584605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.749 [2024-11-18 07:20:48.584843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.749 [2024-11-18 07:20:48.584864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.749 [2024-11-18 07:20:48.584879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.749 [2024-11-18 07:20:48.584893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.749 [2024-11-18 07:20:48.597372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.749 [2024-11-18 07:20:48.597886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.749 [2024-11-18 07:20:48.597922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.749 [2024-11-18 07:20:48.597942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.749 [2024-11-18 07:20:48.598178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.750 [2024-11-18 07:20:48.598413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.750 [2024-11-18 07:20:48.598435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.750 [2024-11-18 07:20:48.598451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.750 [2024-11-18 07:20:48.598466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.750 [2024-11-18 07:20:48.610948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.750 [2024-11-18 07:20:48.611482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.750 [2024-11-18 07:20:48.611525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.750 [2024-11-18 07:20:48.611545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.750 [2024-11-18 07:20:48.611783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.750 [2024-11-18 07:20:48.611999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.750 [2024-11-18 07:20:48.612019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.750 [2024-11-18 07:20:48.612035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.750 [2024-11-18 07:20:48.612049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.750 [2024-11-18 07:20:48.624576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.750 [2024-11-18 07:20:48.625075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.750 [2024-11-18 07:20:48.625111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.750 [2024-11-18 07:20:48.625129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.750 [2024-11-18 07:20:48.625366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.750 [2024-11-18 07:20:48.625612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.750 [2024-11-18 07:20:48.625634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.750 [2024-11-18 07:20:48.625649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.750 [2024-11-18 07:20:48.625664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.750 [2024-11-18 07:20:48.638136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.750 [2024-11-18 07:20:48.638469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.750 [2024-11-18 07:20:48.638505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.750 [2024-11-18 07:20:48.638523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.750 [2024-11-18 07:20:48.638737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.750 [2024-11-18 07:20:48.638966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.750 [2024-11-18 07:20:48.638986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.750 [2024-11-18 07:20:48.638999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.750 [2024-11-18 07:20:48.639011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.750 [2024-11-18 07:20:48.651760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.750 [2024-11-18 07:20:48.652100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.750 [2024-11-18 07:20:48.652128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.750 [2024-11-18 07:20:48.652145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.750 [2024-11-18 07:20:48.652366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.750 [2024-11-18 07:20:48.652594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.750 [2024-11-18 07:20:48.652615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.750 [2024-11-18 07:20:48.652629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.750 [2024-11-18 07:20:48.652641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.750 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:27.750 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:27.750 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:27.750 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:27.750 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:27.750 [2024-11-18 07:20:48.665430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.750 [2024-11-18 07:20:48.665801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.750 [2024-11-18 07:20:48.665829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.750 [2024-11-18 07:20:48.665846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.750 [2024-11-18 07:20:48.666060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.750 [2024-11-18 07:20:48.666305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.750 [2024-11-18 07:20:48.666328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.750 [2024-11-18 07:20:48.666342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.750 [2024-11-18 07:20:48.666355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.750 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:27.750 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:27.750 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.750 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:27.750 [2024-11-18 07:20:48.679087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.750 [2024-11-18 07:20:48.679437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.750 [2024-11-18 07:20:48.679465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.750 [2024-11-18 07:20:48.679480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.750 [2024-11-18 07:20:48.679702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.750 [2024-11-18 07:20:48.679932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.750 [2024-11-18 07:20:48.679953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.750 [2024-11-18 07:20:48.679966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.750 [2024-11-18 07:20:48.679983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.750 [2024-11-18 07:20:48.680938] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:27.750 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.750 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:27.750 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.750 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:27.750 [2024-11-18 07:20:48.692817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.750 [2024-11-18 07:20:48.693178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.750 [2024-11-18 07:20:48.693207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.750 [2024-11-18 07:20:48.693223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.750 [2024-11-18 07:20:48.693440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.750 [2024-11-18 07:20:48.693710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.750 [2024-11-18 07:20:48.693732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.750 [2024-11-18 07:20:48.693746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.750 [2024-11-18 07:20:48.693759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.750 [2024-11-18 07:20:48.706397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.750 [2024-11-18 07:20:48.706784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.750 [2024-11-18 07:20:48.706822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.750 [2024-11-18 07:20:48.706839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.750 [2024-11-18 07:20:48.707082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.750 [2024-11-18 07:20:48.707304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.750 [2024-11-18 07:20:48.707323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.750 [2024-11-18 07:20:48.707336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.750 [2024-11-18 07:20:48.707347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.750 [2024-11-18 07:20:48.719934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.750 [2024-11-18 07:20:48.720412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.751 [2024-11-18 07:20:48.720447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:27.751 [2024-11-18 07:20:48.720465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:27.751 [2024-11-18 07:20:48.720704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:27.751 [2024-11-18 07:20:48.720925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.751 [2024-11-18 07:20:48.720947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.751 [2024-11-18 07:20:48.720978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.751 [2024-11-18 07:20:48.720992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.751 Malloc0 00:35:27.751 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.751 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:27.751 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.751 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:28.008 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.008 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:28.008 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.008 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:28.008 [2024-11-18 07:20:48.733523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.008 [2024-11-18 07:20:48.733860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.008 [2024-11-18 07:20:48.733888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dccf0 with addr=10.0.0.2, port=4420 00:35:28.008 [2024-11-18 07:20:48.733905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dccf0 is same with the state(6) to be set 00:35:28.008 [2024-11-18 07:20:48.734145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dccf0 (9): Bad file descriptor 00:35:28.008 [2024-11-18 07:20:48.734355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.008 [2024-11-18 07:20:48.734376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.008 [2024-11-18 07:20:48.734390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:35:28.008 [2024-11-18 07:20:48.734402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:28.008 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.008 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:28.008 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.008 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:28.008 [2024-11-18 07:20:48.740969] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:28.008 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.008 07:20:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 400010 00:35:28.008 [2024-11-18 07:20:48.747045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.009 [2024-11-18 07:20:48.772059] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:35:28.943 3802.17 IOPS, 14.85 MiB/s [2024-11-18T06:20:51.295Z] 4489.29 IOPS, 17.54 MiB/s [2024-11-18T06:20:52.229Z] 4966.62 IOPS, 19.40 MiB/s [2024-11-18T06:20:53.164Z] 5367.78 IOPS, 20.97 MiB/s [2024-11-18T06:20:54.097Z] 5681.70 IOPS, 22.19 MiB/s [2024-11-18T06:20:55.031Z] 5919.00 IOPS, 23.12 MiB/s [2024-11-18T06:20:55.965Z] 6125.00 IOPS, 23.93 MiB/s [2024-11-18T06:20:57.339Z] 6285.46 IOPS, 24.55 MiB/s [2024-11-18T06:20:58.274Z] 6450.00 IOPS, 25.20 MiB/s [2024-11-18T06:20:58.274Z] 6581.53 IOPS, 25.71 MiB/s 00:35:37.296 Latency(us) 00:35:37.296 [2024-11-18T06:20:58.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:37.296 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:37.296 Verification LBA range: start 0x0 length 0x4000 00:35:37.296 Nvme1n1 : 15.01 6583.65 25.72 9963.76 0.00 7712.58 631.09 23787.14 00:35:37.296 [2024-11-18T06:20:58.274Z] =================================================================================================================== 00:35:37.296 [2024-11-18T06:20:58.274Z] Total : 6583.65 25.72 9963.76 0.00 7712.58 631.09 23787.14 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 
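Between the restart above and the successful reset at the end of this run, the rpc_cmd calls traced in the preceding blocks rebuild the target configuration that bdevperf reconnects to: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 carrying that namespace, and a listener on 10.0.0.2:4420. The sketch below gathers them in one place; the rpc.py path is an assumption (the in-tree SPDK script), while every argument is copied from the xtrace output.

    # The RPC sequence traced above, collected in one place (sketch).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420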
00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:37.296 rmmod nvme_tcp 00:35:37.296 rmmod nvme_fabrics 00:35:37.296 rmmod nvme_keyring 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 400678 ']' 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 400678 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 400678 ']' 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 400678 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 400678 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 400678' 00:35:37.296 killing process with pid 400678 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 400678 00:35:37.296 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 400678 00:35:37.557 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:37.557 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:37.557 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:37.557 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:35:37.557 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:35:37.557 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:37.557 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:35:37.557 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:37.557 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:37.557 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:37.557 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:37.557 07:20:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:40.089 00:35:40.089 real 0m22.426s 00:35:40.089 user 0m59.578s 00:35:40.089 sys 0m4.417s 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 
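The teardown traced here (nvmftestfini) unloads nvme-tcp, nvme-fabrics and nvme-keyring, kills target pid 400678, and finally restores iptables while dropping only the SPDK-tagged rules; that last iptr step amounts to a save/filter/restore pipeline. A minimal sketch of it, not the verbatim nvmf/common.sh code:

    # Rough equivalent of the iptr cleanup traced above: keep every rule
    # except the ones tagged SPDK_NVMF (sketch, not the in-tree helper).
    iptables-save | grep -v SPDK_NVMF | iptables-restore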
00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:40.089 ************************************ 00:35:40.089 END TEST nvmf_bdevperf 00:35:40.089 ************************************ 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.089 ************************************ 00:35:40.089 START TEST nvmf_target_disconnect 00:35:40.089 ************************************ 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:40.089 * Looking for test storage... 00:35:40.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:40.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.089 --rc genhtml_branch_coverage=1 00:35:40.089 --rc genhtml_function_coverage=1 00:35:40.089 --rc genhtml_legend=1 00:35:40.089 --rc geninfo_all_blocks=1 00:35:40.089 --rc geninfo_unexecuted_blocks=1 00:35:40.089 00:35:40.089 ' 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:40.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.089 --rc genhtml_branch_coverage=1 00:35:40.089 --rc genhtml_function_coverage=1 00:35:40.089 --rc genhtml_legend=1 00:35:40.089 --rc geninfo_all_blocks=1 00:35:40.089 --rc geninfo_unexecuted_blocks=1 00:35:40.089 00:35:40.089 ' 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:40.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.089 --rc genhtml_branch_coverage=1 00:35:40.089 --rc genhtml_function_coverage=1 00:35:40.089 --rc genhtml_legend=1 00:35:40.089 --rc geninfo_all_blocks=1 00:35:40.089 --rc geninfo_unexecuted_blocks=1 00:35:40.089 00:35:40.089 ' 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:40.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.089 --rc genhtml_branch_coverage=1 00:35:40.089 --rc genhtml_function_coverage=1 00:35:40.089 --rc genhtml_legend=1 00:35:40.089 --rc geninfo_all_blocks=1 00:35:40.089 --rc geninfo_unexecuted_blocks=1 00:35:40.089 00:35:40.089 ' 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:40.089 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:40.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:35:40.090 07:21:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:41.994 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:41.994 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:41.994 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:41.994 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:41.995 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
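The PCI scan above has settled on the two e810 ports, cvl_0_0 and cvl_0_1, and nvmf_tcp_init now wires them into the loopback topology used for the rest of the run: cvl_0_0 is moved into a private network namespace and addressed as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). A condensed bash sketch of that setup, reconstructed from the trace that follows (interface names, namespace name and addresses come from the trace itself; the grouping is an illustrative summary, not the verbatim test/nvmf/common.sh code):

#!/usr/bin/env bash
# Sketch of the topology nvmf_tcp_init builds in the trace below (assumes root and
# that the e810 ports cvl_0_0 / cvl_0_1 already exist, as reported above).
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0                     # start from clean interfaces
ip -4 addr flush cvl_0_1
ip netns add "$NS"                           # namespace that will host nvmf_tgt
ip link set cvl_0_0 netns "$NS"              # target-side port goes into the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0      # target address

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# open TCP port 4420 (NVMe/TCP) on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# sanity pings in both directions, as in the trace
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1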
00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:41.995 07:21:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:42.253 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:42.253 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:42.253 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:42.253 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:42.253 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:42.253 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:42.253 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:42.253 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:42.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:42.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:35:42.253 00:35:42.253 --- 10.0.0.2 ping statistics --- 00:35:42.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:42.254 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:42.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:42.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:35:42.254 00:35:42.254 --- 10.0.0.1 ping statistics --- 00:35:42.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:42.254 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:42.254 ************************************ 00:35:42.254 START TEST nvmf_target_disconnect_tc1 00:35:42.254 ************************************ 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:42.254 07:21:03 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:42.254 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:42.512 [2024-11-18 07:21:03.268358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.512 [2024-11-18 07:21:03.268428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19d9a90 with addr=10.0.0.2, port=4420 00:35:42.512 [2024-11-18 07:21:03.268467] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:42.512 [2024-11-18 07:21:03.268524] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:42.512 [2024-11-18 07:21:03.268546] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:35:42.512 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:42.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:42.513 Initializing NVMe Controllers 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:42.513 00:35:42.513 real 0m0.097s 00:35:42.513 user 0m0.035s 00:35:42.513 sys 0m0.062s 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:42.513 ************************************ 00:35:42.513 END TEST nvmf_target_disconnect_tc1 00:35:42.513 ************************************ 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:42.513 ************************************ 00:35:42.513 START TEST nvmf_target_disconnect_tc2 00:35:42.513 ************************************ 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=403945 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 403945 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 403945 ']' 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:42.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:42.513 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:42.513 [2024-11-18 07:21:03.381483] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:35:42.513 [2024-11-18 07:21:03.381586] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:42.513 [2024-11-18 07:21:03.455094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:42.771 [2024-11-18 07:21:03.503749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:42.771 [2024-11-18 07:21:03.503814] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:42.771 [2024-11-18 07:21:03.503829] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:42.771 [2024-11-18 07:21:03.503841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:42.771 [2024-11-18 07:21:03.503850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:42.771 [2024-11-18 07:21:03.505295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:42.771 [2024-11-18 07:21:03.505370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:42.771 [2024-11-18 07:21:03.505424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:42.771 [2024-11-18 07:21:03.505426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:42.771 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:42.771 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:35:42.771 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:42.771 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:42.771 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:42.771 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:42.771 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:42.771 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.771 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:42.771 Malloc0 00:35:42.771 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.771 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:42.771 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.771 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:42.771 [2024-11-18 07:21:03.680318] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:42.771 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.772 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:42.772 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.772 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:42.772 07:21:03 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.772 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:42.772 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.772 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:42.772 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.772 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:42.772 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.772 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:42.772 [2024-11-18 07:21:03.708614] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:42.772 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.772 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:42.772 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.772 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:42.772 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.772 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=404088 00:35:42.772 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:42.772 07:21:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:35:45.337 07:21:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 403945 00:35:45.337 07:21:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error 
(sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Write completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Write completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Write completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Write completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Write completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Write completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Write completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Write completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.337 Read completed with error (sct=0, sc=8) 00:35:45.337 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 [2024-11-18 07:21:05.732750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed 
with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 [2024-11-18 07:21:05.733139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 
Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 [2024-11-18 07:21:05.733463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 
00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Write completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 Read completed with error (sct=0, sc=8) 00:35:45.338 starting I/O failed 00:35:45.338 [2024-11-18 07:21:05.733791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:45.338 [2024-11-18 07:21:05.733948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.338 [2024-11-18 07:21:05.733998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.338 qpair failed and we were unable to recover it. 00:35:45.338 [2024-11-18 07:21:05.734101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.338 [2024-11-18 07:21:05.734130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.338 qpair failed and we were unable to recover it. 00:35:45.338 [2024-11-18 07:21:05.734226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.338 [2024-11-18 07:21:05.734254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.338 qpair failed and we were unable to recover it. 00:35:45.338 [2024-11-18 07:21:05.734376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.338 [2024-11-18 07:21:05.734403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.338 qpair failed and we were unable to recover it. 00:35:45.338 [2024-11-18 07:21:05.734528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.338 [2024-11-18 07:21:05.734555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.338 qpair failed and we were unable to recover it. 00:35:45.339 [2024-11-18 07:21:05.734647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.339 [2024-11-18 07:21:05.734675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.339 qpair failed and we were unable to recover it. 00:35:45.339 [2024-11-18 07:21:05.734771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.339 [2024-11-18 07:21:05.734798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.339 qpair failed and we were unable to recover it. 00:35:45.339 [2024-11-18 07:21:05.734943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.339 [2024-11-18 07:21:05.734970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.339 qpair failed and we were unable to recover it. 
00:35:45.339 [2024-11-18 07:21:05.735086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.339 [2024-11-18 07:21:05.735112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420
00:35:45.339 qpair failed and we were unable to recover it.
00:35:45.339 [... the same triple -- posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt from 07:21:05.735 through 07:21:05.764, cycling over tqpairs 0x7f7728000b90, 0x7f772c000b90 and 0x7f7734000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:35:45.344 [2024-11-18 07:21:05.764556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.344 [2024-11-18 07:21:05.764584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.344 qpair failed and we were unable to recover it. 00:35:45.344 [2024-11-18 07:21:05.764730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.344 [2024-11-18 07:21:05.764757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.344 qpair failed and we were unable to recover it. 00:35:45.344 [2024-11-18 07:21:05.764847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.344 [2024-11-18 07:21:05.764874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.344 qpair failed and we were unable to recover it. 00:35:45.344 [2024-11-18 07:21:05.764987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.344 [2024-11-18 07:21:05.765013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.344 qpair failed and we were unable to recover it. 00:35:45.344 [2024-11-18 07:21:05.765135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.344 [2024-11-18 07:21:05.765162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.344 qpair failed and we were unable to recover it. 00:35:45.344 [2024-11-18 07:21:05.765273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.344 [2024-11-18 07:21:05.765300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.344 qpair failed and we were unable to recover it. 00:35:45.344 [2024-11-18 07:21:05.765432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.344 [2024-11-18 07:21:05.765459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.344 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.765558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.765594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.765677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.765708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.765824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.765851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 
00:35:45.345 [2024-11-18 07:21:05.766000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.766026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.766105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.766130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.766218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.766247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.766332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.766361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.766472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.766506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.766625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.766652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.766797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.766824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.766916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.766943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.767055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.767081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.767191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.767219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 
00:35:45.345 [2024-11-18 07:21:05.767352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.767379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.767502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.767529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.767649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.767675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.767762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.767788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.767908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.767934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.768048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.768076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.768161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.768197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.768306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.768334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.768458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.768485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.768604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.768631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 
00:35:45.345 [2024-11-18 07:21:05.768728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.768755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.768872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.768899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.769021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.769048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.769165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.769192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.769311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.769338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.769428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.769454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.769554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.769582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.769666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.769693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.769885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.769912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.770018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.770045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 
00:35:45.345 [2024-11-18 07:21:05.770129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.770156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.770269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.770296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.770422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.770448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.770544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.345 [2024-11-18 07:21:05.770571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.345 qpair failed and we were unable to recover it. 00:35:45.345 [2024-11-18 07:21:05.770712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.770738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.770862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.770907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.770991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.771019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.771101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.771129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.771236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.771268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.771360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.771387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 
00:35:45.346 [2024-11-18 07:21:05.771526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.771553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.771692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.771719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.771838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.771871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.771947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.771974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.772120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.772146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.772257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.772284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.772366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.772393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.772527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.772566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.772659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.772686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.772804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.772830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 
00:35:45.346 [2024-11-18 07:21:05.772913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.772938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.773053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.773080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.773231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.773258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.773370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.773395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.773483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.773515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.773625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.773652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.773734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.773762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.773880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.773907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.774020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.774047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.774161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.774188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 
00:35:45.346 [2024-11-18 07:21:05.774335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.774362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.774473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.774513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.774628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.774654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.774766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.774793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.774890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.774917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.774996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.775033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.775150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.775176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.775264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.775290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.346 [2024-11-18 07:21:05.775369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.346 [2024-11-18 07:21:05.775395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.346 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.775503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.775529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 
00:35:45.347 [2024-11-18 07:21:05.775652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.775678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.775822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.775848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.775933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.775958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.776041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.776067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.776180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.776206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.776294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.776319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.776454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.776501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.776603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.776630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.776730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.776775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.776892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.776943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 
00:35:45.347 [2024-11-18 07:21:05.777031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.777059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.777171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.777220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.777341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.777368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.777451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.777478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.777590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.777617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.777734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.777761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.777842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.777869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.777960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.777986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.778071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.778099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.778223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.778252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 
00:35:45.347 [2024-11-18 07:21:05.778353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.778392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.778498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.778526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.778623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.778650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.778763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.778789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.778873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.778900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.778989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.779017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.779132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.779160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.779309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.779348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.779468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.779501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.779587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.779614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 
00:35:45.347 [2024-11-18 07:21:05.779735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.779761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.779878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.779904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.779988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.780014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.780106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.780134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.780219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.347 [2024-11-18 07:21:05.780245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.347 qpair failed and we were unable to recover it. 00:35:45.347 [2024-11-18 07:21:05.780359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.780391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.780499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.780534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.780614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.780641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.780735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.780762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.780844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.780872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 
00:35:45.348 [2024-11-18 07:21:05.780961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.780987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.781075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.781103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.781214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.781241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.781356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.781383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.781475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.781509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.781597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.781625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.781715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.781741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.781845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.781872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.781957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.781985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.782077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.782104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 
00:35:45.348 [2024-11-18 07:21:05.782185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.782211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.782321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.782347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.782459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.782486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.782588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.782616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.782730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.782756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.782832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.782858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.782939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.782966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.783041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.783067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.783146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.783172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.783325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.783351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 
00:35:45.348 [2024-11-18 07:21:05.783504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.783530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.783609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.783635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.783753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.783780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.783918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.783944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.784017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.784043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.784128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.784155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.784264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.784290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.784416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.784455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.784588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.784627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.784748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.784777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 
00:35:45.348 [2024-11-18 07:21:05.784935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.784961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.785116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.785168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.348 [2024-11-18 07:21:05.785305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.348 [2024-11-18 07:21:05.785332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.348 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.785441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.785467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.785591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.785629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.785719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.785751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.785849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.785876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.786018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.786044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.786141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.786179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.786263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.786290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 
00:35:45.349 [2024-11-18 07:21:05.786435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.786461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.786561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.786588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.786714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.786742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.786858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.786883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.786997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.787024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.787135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.787162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.787298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.787323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.787468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.787500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.787585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.787611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.787743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.787783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 
00:35:45.349 [2024-11-18 07:21:05.787884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.787912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.788050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.788077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.788165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.788193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.788329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.788356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.788497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.788536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.788652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.788680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.788756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.788781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.788866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.788892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.788973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.788999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.789081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.789107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 
00:35:45.349 [2024-11-18 07:21:05.789192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.789218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.789335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.789363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.789507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.789545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.789644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.789671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.789797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.789823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.789932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.789958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.790059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.790098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.790258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.790312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.790425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.790451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 00:35:45.349 [2024-11-18 07:21:05.790579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.790606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.349 qpair failed and we were unable to recover it. 
00:35:45.349 [2024-11-18 07:21:05.790715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.349 [2024-11-18 07:21:05.790740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.790824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.790849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.790937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.790963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.791166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.791214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.791306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.791334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.791449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.791498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.791586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.791612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.791724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.791749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.791863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.791889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.792002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.792029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 
00:35:45.350 [2024-11-18 07:21:05.792137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.792163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.792256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.792282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.792362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.792389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.792502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.792528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.792636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.792661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.792748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.792773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.792857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.792883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.792960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.792986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.793101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.793126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.793248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.793274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 
00:35:45.350 [2024-11-18 07:21:05.793416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.793446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.793578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.793607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.793721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.793747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.793862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.793888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.793998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.794023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.794108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.794134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.794245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.794271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.794381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.794407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.794499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.794525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.794665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.794691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 
00:35:45.350 [2024-11-18 07:21:05.794805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.794831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.794945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.794970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.795086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.795116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.795201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.795227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.795325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.795364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.795464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.795496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.795609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.795635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.795748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.350 [2024-11-18 07:21:05.795775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.350 qpair failed and we were unable to recover it. 00:35:45.350 [2024-11-18 07:21:05.795885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.795910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.796029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.796055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 
00:35:45.351 [2024-11-18 07:21:05.796226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.796263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.796402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.796428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.796510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.796536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.796648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.796673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.796794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.796819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.796932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.796959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.797142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.797178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.797322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.797348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.797487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.797520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.797614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.797639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 
00:35:45.351 [2024-11-18 07:21:05.797753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.797779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.797920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.797945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.798062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.798087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.798178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.798204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.798296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.798322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.798448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.798502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.798623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.798650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.798762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.798788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.798896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.798922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.799059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.799091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 
00:35:45.351 [2024-11-18 07:21:05.799229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.799255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.799396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.799421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.799535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.799562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.799644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.799669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.799778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.799803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.799919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.799945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.800073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.800112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.800229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.800255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.800389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.800428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.800563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.800593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 
00:35:45.351 [2024-11-18 07:21:05.800747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.800774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.800900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.800926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.351 qpair failed and we were unable to recover it. 00:35:45.351 [2024-11-18 07:21:05.801078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.351 [2024-11-18 07:21:05.801105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.801194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.801221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.801336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.801362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.801451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.801479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.801615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.801654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.801798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.801826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.801946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.801972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.802058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.802084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 
00:35:45.352 [2024-11-18 07:21:05.802175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.802203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.802288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.802315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.802398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.802425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.802541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.802580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.802681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.802709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.802823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.802849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.802939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.802966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.803102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.803128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.803254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.803292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.803381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.803409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 
00:35:45.352 [2024-11-18 07:21:05.803512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.803540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.803682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.803708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.803796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.803822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.803961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.803988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.804131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.804179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.804270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.804298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.804405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.804432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.804535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.804564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.804653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.804680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.804818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.804862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 
00:35:45.352 [2024-11-18 07:21:05.804945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.804994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.805176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.805203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.805319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.805345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.805457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.805482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.805620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.805649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.805766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.805792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.805878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.805904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.806013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.806039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.806145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.806173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 00:35:45.352 [2024-11-18 07:21:05.806259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.352 [2024-11-18 07:21:05.806287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.352 qpair failed and we were unable to recover it. 
00:35:45.352 [2024-11-18 07:21:05.806395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.806422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.806562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.806590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.806671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.806698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.806832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.806860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.806946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.806974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.807131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.807170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.807284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.807311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.807397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.807424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.807541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.807570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.807659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.807686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 
00:35:45.353 [2024-11-18 07:21:05.807778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.807805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.807917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.807943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.808030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.808056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.808201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.808251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.808391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.808417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.808569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.808597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.808743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.808789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.808898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.808925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.809097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.809147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.809285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.809312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 
00:35:45.353 [2024-11-18 07:21:05.809438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.809477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.809607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.809634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.809744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.809770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.809853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.809879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.809988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.810014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.810093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.810119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.810204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.810230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.810314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.810343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.810426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.810455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.810550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.810577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 
00:35:45.353 [2024-11-18 07:21:05.810732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.810780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.810888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.810942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.811087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.811140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.811220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.811247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.811361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.811389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.811505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.811533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.811621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.811649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.353 [2024-11-18 07:21:05.811729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.353 [2024-11-18 07:21:05.811755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.353 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.811889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.811915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.812074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.812100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 
00:35:45.354 [2024-11-18 07:21:05.812213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.812239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.812356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.812383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.812470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.812502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.812594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.812620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.812732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.812758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.812849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.812877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.812993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.813019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.813132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.813184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.813303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.813329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.813415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.813444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 
00:35:45.354 [2024-11-18 07:21:05.813537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.813564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.813641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.813667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.813790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.813817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.813932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.813958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.814051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.814077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.814158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.814185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.814272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.814305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.814422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.814448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.814568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.814595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.814733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.814758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 
00:35:45.354 [2024-11-18 07:21:05.814867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.814893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.815027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.815063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.815193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.815222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.815332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.815358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.815442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.815468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.815564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.815592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.815674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.815700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.815782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.815809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.815965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.815993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.816077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.816103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 
00:35:45.354 [2024-11-18 07:21:05.816191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.816218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.816326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.816353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.816472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.816517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.816604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.816633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.816746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.816772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.354 qpair failed and we were unable to recover it. 00:35:45.354 [2024-11-18 07:21:05.816915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.354 [2024-11-18 07:21:05.816968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.817051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.817078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.817167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.817194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.817321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.817359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.817503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.817542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 
00:35:45.355 [2024-11-18 07:21:05.817659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.817686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.817775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.817801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.817892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.817941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.818121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.818176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.818298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.818325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.818411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.818437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.818573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.818600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.818680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.818707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.818798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.818825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.818907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.818934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 
00:35:45.355 [2024-11-18 07:21:05.819052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.819078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.819166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.819193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.819335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.819362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.819455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.819481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.819591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.819618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.819702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.819728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.819840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.819874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.819990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.820016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.820126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.820153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.820267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.820293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 
00:35:45.355 [2024-11-18 07:21:05.820380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.820418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.820575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.820614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.820708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.820736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.820836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.820862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.820971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.820996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.821120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.821145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.821236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.821263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.821391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.821430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.821564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.821603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.821751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.821779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 
00:35:45.355 [2024-11-18 07:21:05.821898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.821925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.822068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.822095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.355 qpair failed and we were unable to recover it. 00:35:45.355 [2024-11-18 07:21:05.822206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.355 [2024-11-18 07:21:05.822246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.822401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.822429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.822537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.822576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.822699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.822726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.822843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.822869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.822977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.823027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.823110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.823138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.823260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.823287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 
00:35:45.356 [2024-11-18 07:21:05.823406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.823445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.823552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.823581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.823663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.823690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.823784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.823812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.823929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.823955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.824061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.824090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.824177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.824204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.824315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.824342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.824426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.824453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.824570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.824597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 
00:35:45.356 [2024-11-18 07:21:05.824739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.824766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.824874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.824900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.824984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.825010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.825122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.825147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.825232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.825259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.825373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.825399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.825480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.825515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.825613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.825640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.825720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.825747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.825889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.825916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 
00:35:45.356 [2024-11-18 07:21:05.826007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.826033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.826147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.826174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.826278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.826305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.826415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.826442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.826526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.826553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.356 qpair failed and we were unable to recover it. 00:35:45.356 [2024-11-18 07:21:05.826639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.356 [2024-11-18 07:21:05.826665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.826806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.826832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.826915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.826942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.827037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.827064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.827204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.827230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 
00:35:45.357 [2024-11-18 07:21:05.827319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.827346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.827430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.827458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.827557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.827586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.827716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.827754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.827875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.827904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.828019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.828046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.828162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.828188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.828300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.828326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.828438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.828466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.828598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.828625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 
00:35:45.357 [2024-11-18 07:21:05.828736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.828762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.828873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.828924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.829061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.829108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.829250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.829300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.829410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.829436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.829550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.829578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.829691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.829717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.829803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.829829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.829916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.829943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.830023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.830049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 
00:35:45.357 [2024-11-18 07:21:05.830169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.830198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.830340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.830366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.830470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.830529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.830626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.830653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.830743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.830769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.830856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.830882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.830997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.831023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.831169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.831197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.831314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.831342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.831462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.831499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 
00:35:45.357 [2024-11-18 07:21:05.831623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.831650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.831761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.831787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.831975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.832024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.357 qpair failed and we were unable to recover it. 00:35:45.357 [2024-11-18 07:21:05.832151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.357 [2024-11-18 07:21:05.832209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.832348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.832375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.832487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.832520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.832629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.832657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.832771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.832797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.832885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.832914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.833024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.833051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 
00:35:45.358 [2024-11-18 07:21:05.833151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.833177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.833287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.833316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.833428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.833454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.833600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.833627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.833739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.833767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.833881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.833907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.833996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.834022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.834134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.834162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.834273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.834300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.834396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.834435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 
00:35:45.358 [2024-11-18 07:21:05.834536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.834565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.834676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.834702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.834803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.834829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.834935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.834966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.835048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.835074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.835186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.835213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.835345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.835384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.835516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.835555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.835676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.835705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.835821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.835848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 
00:35:45.358 [2024-11-18 07:21:05.835994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.836044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.836187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.836234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.836323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.836351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.836436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.836462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.836574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.836613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.836732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.836759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.836878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.836904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.837005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.837030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.837195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.837221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.837363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.837389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 
00:35:45.358 [2024-11-18 07:21:05.837485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-11-18 07:21:05.837520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.358 qpair failed and we were unable to recover it. 00:35:45.358 [2024-11-18 07:21:05.837602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.837628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.837716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.837746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.837867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.837894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.837986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.838014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.838097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.838123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.838285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.838314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.838395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.838423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.838545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.838572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.838653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.838680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 
00:35:45.359 [2024-11-18 07:21:05.838766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.838795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.838920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.838947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.839057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.839107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.839218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.839244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.839335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.839362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.839476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.839510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.839601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.839627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.839713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.839738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.839829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.839855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.839939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.839966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 
00:35:45.359 [2024-11-18 07:21:05.840097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.840138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.840253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.840280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.840376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.840405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.840508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.840535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.840636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.840663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.840752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.840778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.840899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.840926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.841019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.841047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.841128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.841155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.841268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.841294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 
00:35:45.359 [2024-11-18 07:21:05.841416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.841442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.841559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.841586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.841705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.841732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.841825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.841850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.841938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.841964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.842080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.842107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.842244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.842271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.842372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.842412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.359 [2024-11-18 07:21:05.842548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-11-18 07:21:05.842576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.359 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.842664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.842691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 
00:35:45.360 [2024-11-18 07:21:05.842774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.842813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.842931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.842958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.843052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.843078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.843162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.843189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.843317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.843346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.843440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.843469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.843590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.843617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.843728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.843753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.843847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.843872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.843955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.843982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 
00:35:45.360 [2024-11-18 07:21:05.844098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.844132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.844221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.844246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.844330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.844356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.844503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.844530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.844639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.844665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.844746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.844774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.844897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.844923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.845041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.845066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.845206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.845232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.845322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.845348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 
00:35:45.360 [2024-11-18 07:21:05.845483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.845515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.845602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.845627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.845707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.845733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.845852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.845877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.845997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.846030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.846115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.846141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.846238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.846266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.846404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.846433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.846555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.846582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.846692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.846719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 
00:35:45.360 [2024-11-18 07:21:05.846807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.846834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.846924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.846951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.847096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-11-18 07:21:05.847148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.360 qpair failed and we were unable to recover it. 00:35:45.360 [2024-11-18 07:21:05.847265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.847291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.847398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.847437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.847571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.847599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.847739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.847766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.847853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.847881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.848078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.848133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.848218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.848244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 
00:35:45.361 [2024-11-18 07:21:05.848340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.848367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.848470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.848522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.848612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.848640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.848737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.848763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.848860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.848898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.848981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.849008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.849091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.849120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.849207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.849235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.849322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.849348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.849504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.849531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 
00:35:45.361 [2024-11-18 07:21:05.849640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.849704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.849790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.849817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.849902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.849930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.850075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.850103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.850221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.850248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.850372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.850400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.850512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.850539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.850626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.850653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.850742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.850780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.850966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.851018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 
00:35:45.361 [2024-11-18 07:21:05.851165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.851214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.851350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.851377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.851465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.851496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.851607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.851633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.851740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.851793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.851974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.852022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.852135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.852191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.852273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.852300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.852435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.852461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 00:35:45.361 [2024-11-18 07:21:05.852558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.852585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.361 qpair failed and we were unable to recover it. 
00:35:45.361 [2024-11-18 07:21:05.852689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.361 [2024-11-18 07:21:05.852715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.852825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.852851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.852941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.852968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.853053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.853079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.853218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.853244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.853331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.853356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.853499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.853529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.853648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.853674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.853756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.853783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.853926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.853952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 
00:35:45.362 [2024-11-18 07:21:05.854059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.854085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.854160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.854186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.854294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.854321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.854409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.854435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.854572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.854599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.854708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.854734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.854821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.854846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.854926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.854952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.855061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.855087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.855201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.855229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 
00:35:45.362 [2024-11-18 07:21:05.855337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.855368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.855441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.855468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.855592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.855620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.855733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.855758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.855883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.855909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.856024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.856049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.856134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.856160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.856269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.856295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.856382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.856410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.856538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.856565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 
00:35:45.362 [2024-11-18 07:21:05.856687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.856714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.856827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.856854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.856967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.856993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.857104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.857130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.857226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.857253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.857373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.857400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.857518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.857545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.857665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.362 [2024-11-18 07:21:05.857692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.362 qpair failed and we were unable to recover it. 00:35:45.362 [2024-11-18 07:21:05.857788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.857814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.857961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.857987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 
00:35:45.363 [2024-11-18 07:21:05.858065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.858091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.858185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.858212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.858291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.858318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.858463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.858495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.858609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.858636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.858755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.858782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.858878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.858904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.859026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.859053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.859138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.859165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.859278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.859304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 
00:35:45.363 [2024-11-18 07:21:05.859444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.859470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.859562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.859592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.859681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.859720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.859847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.859874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.859986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.860014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.860104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.860131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.860253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.860301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.860390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.860418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.860549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.860576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.860691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.860718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 
00:35:45.363 [2024-11-18 07:21:05.860808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.860839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.860954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.860981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.861058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.861085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.861202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.861228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.861353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.861379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.861509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.861537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.861632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.861658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.861750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.861776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.861917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.861944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.862033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.862061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 
00:35:45.363 [2024-11-18 07:21:05.862152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.862180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.862295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.862321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.862407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.862434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.862545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.862572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.862689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.862716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.363 [2024-11-18 07:21:05.862812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.363 [2024-11-18 07:21:05.862840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.363 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.862926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.862952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.863062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.863089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.863227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.863254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.863343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.863371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 
00:35:45.364 [2024-11-18 07:21:05.863483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.863515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.863633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.863659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.863742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.863767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.863844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.863868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.863967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.863995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.864110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.864137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.864253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.864279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.864405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.864433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.864538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.864564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.864654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.864681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 
00:35:45.364 [2024-11-18 07:21:05.864788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.864814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.864924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.864951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.865062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.865087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.865184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.865211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.865310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.865339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.865447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.865473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.865569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.865595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.865713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.865739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.865855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.865881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.865964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.865993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 
00:35:45.364 [2024-11-18 07:21:05.866107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.866139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.866256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.866282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.866371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.866397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.866517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.866543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.866657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.866685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.866836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.866862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.866984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.867010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.867097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.867123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.867241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.867268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.867355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.867382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 
00:35:45.364 [2024-11-18 07:21:05.867505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.867532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.867649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.867675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.867767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.364 [2024-11-18 07:21:05.867801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.364 qpair failed and we were unable to recover it. 00:35:45.364 [2024-11-18 07:21:05.867943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.867970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.868120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.868147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.868238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.868264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.868344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.868371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.868483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.868516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.868633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.868659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.868734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.868759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 
00:35:45.365 [2024-11-18 07:21:05.868882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.868907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.868985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.869011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.869116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.869142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.869256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.869284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.869396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.869424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.869543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.869582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.869668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.869694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.869799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.869827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.869909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.869936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.870055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.870082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 
00:35:45.365 [2024-11-18 07:21:05.870205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.870231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.870375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.870401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.870524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.870551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.870624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.870650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.870766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.870791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.870900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.870926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.871010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.871036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.871177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.871203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.871280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.871305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.871404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.871442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 
00:35:45.365 [2024-11-18 07:21:05.871550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.871584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.871676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.871703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.871783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.871808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.871919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.871945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.872028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.872054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.872166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.872192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.872279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.365 [2024-11-18 07:21:05.872305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.365 qpair failed and we were unable to recover it. 00:35:45.365 [2024-11-18 07:21:05.872418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.872444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.872566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.872592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.872666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.872691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 
00:35:45.366 [2024-11-18 07:21:05.872776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.872802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.872928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.872953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.873046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.873070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.873184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.873210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.873303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.873329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.873414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.873439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.873567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.873593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.873668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.873695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.873773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.873799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.873884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.873909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 
00:35:45.366 [2024-11-18 07:21:05.874024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.874050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.874165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.874192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.874270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.874296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.874385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.874410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.874511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.874537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.874622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.874648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.874770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.874804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.874892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.874923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.875009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.875035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.875178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.875204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 
00:35:45.366 [2024-11-18 07:21:05.875282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.875308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.875432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.875459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.875576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.875615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.875739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.875767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.875857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.875884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.875971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.875997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.876105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.876133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.876245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.876273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.876392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.876417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.876531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.876557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 
00:35:45.366 [2024-11-18 07:21:05.876666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.876692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.876784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.876809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.876915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.876952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.877027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.877052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.366 [2024-11-18 07:21:05.877203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.366 [2024-11-18 07:21:05.877249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.366 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.877394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.877420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.877564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.877590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.877676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.877702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.877782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.877809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.877987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.878034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 
00:35:45.367 [2024-11-18 07:21:05.878134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.878191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.878340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.878366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.878443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.878470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.878590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.878616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.878734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.878765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.878891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.878916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.879008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.879034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.879148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.879174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.879258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.879283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.879392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.879418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 
00:35:45.367 [2024-11-18 07:21:05.879544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.879571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.879659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.879684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.879768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.879793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.879954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.879980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.880063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.880088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.880169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.880195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.880308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.880341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.880439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.880487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.880594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.880622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.880711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.880738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 
00:35:45.367 [2024-11-18 07:21:05.880862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.880888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.881034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.881061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.881209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.881235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.881356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.881383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.881510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.881538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.881624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.881650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.881740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.881766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.881912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.881938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.882044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.882070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.882146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.882172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 
00:35:45.367 [2024-11-18 07:21:05.882276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.882302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.367 qpair failed and we were unable to recover it. 00:35:45.367 [2024-11-18 07:21:05.882420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.367 [2024-11-18 07:21:05.882446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.882539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.882566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.882655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.882682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.882761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.882787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.882874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.882903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.883016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.883042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.883137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.883162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.883274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.883300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.883390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.883417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 
00:35:45.368 [2024-11-18 07:21:05.883509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.883537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.883634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.883660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.883752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.883778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.883887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.883921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.884011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.884053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.884174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.884200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.884295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.884321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.884433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.884460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.884579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e970 is same with the state(6) to be set 00:35:45.368 [2024-11-18 07:21:05.884720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.884749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 
00:35:45.368 [2024-11-18 07:21:05.884839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.884865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.884977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.885003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.885129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.885155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.885243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.885268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.885382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.885408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.885529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.885557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.885685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.885722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.885852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.885879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.885963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.885991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.886105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.886130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 
00:35:45.368 [2024-11-18 07:21:05.886239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.886265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.886379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.886405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.886520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.886548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.886644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.886670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.886756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.886784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.886875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.886901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.887014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.887040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.887149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.887175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.368 qpair failed and we were unable to recover it. 00:35:45.368 [2024-11-18 07:21:05.887264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.368 [2024-11-18 07:21:05.887294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.887375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.887402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 
00:35:45.369 [2024-11-18 07:21:05.887514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.887541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.887636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.887667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.887748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.887774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.887880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.887906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.887985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.888012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.888119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.888145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.888262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.888288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.888400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.888426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.888523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.888549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.888635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.888660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 
00:35:45.369 [2024-11-18 07:21:05.888750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.888777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.888864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.888889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.889034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.889067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.889150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.889176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.889261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.889287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.889401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.889428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.889573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.889602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.889694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.889721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.889836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.889862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.889973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.889998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 
00:35:45.369 [2024-11-18 07:21:05.890074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.890101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.890190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.890216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.890331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.890357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.890475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.890508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.890623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.890650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.890762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.890788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.890907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.890933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.891078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.891105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.891252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.891279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.891426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.891452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 
00:35:45.369 [2024-11-18 07:21:05.891557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.891584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.891699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.891725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.891844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.891870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.891953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.891978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.369 [2024-11-18 07:21:05.892126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.369 [2024-11-18 07:21:05.892176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.369 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.892286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.892322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.892441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.892466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.892590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.892619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.892731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.892757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.892870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.892896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 
00:35:45.370 [2024-11-18 07:21:05.893007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.893033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.893164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.893200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.893287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.893314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.893427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.893453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.893573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.893599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.893713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.893740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.893869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.893896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.893981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.894007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.894129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.894156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.894262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.894289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 
00:35:45.370 [2024-11-18 07:21:05.894376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.894402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.894484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.894515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.894653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.894680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.894776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.894807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.894956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.894982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.895097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.895124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.895262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.895288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.895398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.895424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.895547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.895575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.895651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.895685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 
00:35:45.370 [2024-11-18 07:21:05.895772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.895798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.895882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.895908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.896034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.896060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.896175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.896200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.896339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.896365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.896459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.896485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.896575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.370 [2024-11-18 07:21:05.896601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.370 qpair failed and we were unable to recover it. 00:35:45.370 [2024-11-18 07:21:05.896679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.896709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.896816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.896854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.896969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.896997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 
00:35:45.371 [2024-11-18 07:21:05.897080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.897108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.897222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.897249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.897331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.897358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.897449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.897477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.897606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.897633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.897762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.897800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.897975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.898013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.898158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.898187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.898277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.898303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.898412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.898439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 
00:35:45.371 [2024-11-18 07:21:05.898553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.898580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.898671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.898701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.898791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.898818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.898894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.898921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.899034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.899060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.899142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.899168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.899309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.899336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.899454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.899480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.899632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.899658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.899742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.899769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 
00:35:45.371 [2024-11-18 07:21:05.899882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.899909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.899990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.900016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.900106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.900133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.900269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.900296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.900406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.900432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.900540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.900567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.900689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.900715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.900852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.900878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.900964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.900990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.901108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.901134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 
00:35:45.371 [2024-11-18 07:21:05.901246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.901272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.901418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.901444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.901562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.901589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.901701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.901728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.371 qpair failed and we were unable to recover it. 00:35:45.371 [2024-11-18 07:21:05.901823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.371 [2024-11-18 07:21:05.901849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.901930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.901957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.902068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.902094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.902179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.902206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.902297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.902323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.902471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.902519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 
00:35:45.372 [2024-11-18 07:21:05.902635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.902663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.902754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.902780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.902889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.902916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.903001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.903027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.903175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.903201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.903328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.903365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.903498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.903537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.903632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.903661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.903776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.903802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.903917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.903944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 
00:35:45.372 [2024-11-18 07:21:05.904052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.904079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.904225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.904266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.904400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.904429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.904558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.904583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.904665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.904691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.904809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.904863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.904999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.905045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.905186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.905235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.905359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.905388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.905476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.905523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 
00:35:45.372 [2024-11-18 07:21:05.905666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.905692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.905810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.905838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.905931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.905979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.906121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.906155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.906371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.906406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.906562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.906590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.906702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.906729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.906826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.906853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.906968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.906995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.907096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.907134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 
00:35:45.372 [2024-11-18 07:21:05.907295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.907340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.907423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.372 [2024-11-18 07:21:05.907450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.372 qpair failed and we were unable to recover it. 00:35:45.372 [2024-11-18 07:21:05.907546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.907573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.907702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.907750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.907922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.907973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.908116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.908167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.908257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.908284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.908363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.908389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.908520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.908554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.908672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.908699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 
00:35:45.373 [2024-11-18 07:21:05.908824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.908888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.909106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.909151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.909377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.909422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.909581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.909609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.909704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.909731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.909830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.909886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.910065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.910110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.910295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.910347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.910486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.910519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.910657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.910683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 
00:35:45.373 [2024-11-18 07:21:05.910797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.910825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.910970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.911022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.911170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.911197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.911320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.911348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.911437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.911464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.911560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.911588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.911668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.911696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.911817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.911844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.911958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.911986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.912099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.912128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 
00:35:45.373 [2024-11-18 07:21:05.912323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.912350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.912429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.912456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.912603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.912631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.912743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.912770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.912874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.912901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.913017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.913045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.913159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.913186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.913299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.913328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.913413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.913440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 00:35:45.373 [2024-11-18 07:21:05.913525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.373 [2024-11-18 07:21:05.913553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.373 qpair failed and we were unable to recover it. 
00:35:45.373 [2024-11-18 07:21:05.913637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.913665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.913751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.913780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.913929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.913972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.914213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.914279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.914464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.914496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.914581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.914608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.914683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.914710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.914848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.914875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.915004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.915037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.915151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.915179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 
00:35:45.374 [2024-11-18 07:21:05.915274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.915303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.915456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.915503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.915599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.915628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.915724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.915751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.915840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.915866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.915967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.916002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.916155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.916202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.916329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.916369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.916483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.916526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.916623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.916651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 
00:35:45.374 [2024-11-18 07:21:05.916759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.916818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.916963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.917009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.917185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.917235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.917323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.917351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.917460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.917504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.917596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.917622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.917715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.917749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.917880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.917930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.918041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.918071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.918190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.918217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 
00:35:45.374 [2024-11-18 07:21:05.918334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.918362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.918472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.918516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.918598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.374 [2024-11-18 07:21:05.918625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.374 qpair failed and we were unable to recover it. 00:35:45.374 [2024-11-18 07:21:05.918740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.918768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.918855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.918883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.918982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.919011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.919182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.919231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.919315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.919342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.919454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.919502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.919603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.919630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 
00:35:45.375 [2024-11-18 07:21:05.919724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.919751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.919882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.919913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.920028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.920055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.920143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.920172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.920285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.920312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.920402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.920430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.920544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.920572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.920667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.920695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.920825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.920857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.920943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.920971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 
00:35:45.375 [2024-11-18 07:21:05.921059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.921107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.921264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.921291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.921378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.921406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.921523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.921551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.921644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.921672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.921780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.921821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.922021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.922077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.922287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.922333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.922450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.922477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.922581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.922609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 
00:35:45.375 [2024-11-18 07:21:05.922686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.922713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.922809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.922837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.923019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.923054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.923201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.923236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.923376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.923404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.923515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.923555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.923647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.923676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.923774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.923812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.923936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.923963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.375 [2024-11-18 07:21:05.924052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.924079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 
00:35:45.375 [2024-11-18 07:21:05.924193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.375 [2024-11-18 07:21:05.924219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.375 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.924361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.924389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.924478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.924513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.924598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.924626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.924714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.924741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.924829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.924856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.924987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.925027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.925165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.925217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.925336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.925364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.925453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.925496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 
00:35:45.376 [2024-11-18 07:21:05.925590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.925617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.925704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.925731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.925828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.925857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.925968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.925997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.926077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.926104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.926212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.926238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.926321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.926348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.926461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.926501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.926590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.926625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.926712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.926739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 
00:35:45.376 [2024-11-18 07:21:05.926831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.926858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.926943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.926970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.927109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.927136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.927276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.927303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.927419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.927446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.927538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.927566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.927659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.927686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.927770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.927807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.927920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.927947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.928069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.928096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 
00:35:45.376 [2024-11-18 07:21:05.928213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.928240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.928332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.928359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.928441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.928468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.928566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.928594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.928678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.928704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.928793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.928819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.928933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.928962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.929078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.376 [2024-11-18 07:21:05.929105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.376 qpair failed and we were unable to recover it. 00:35:45.376 [2024-11-18 07:21:05.929219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.929245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.929327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.929355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 
00:35:45.377 [2024-11-18 07:21:05.929431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.929458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.929570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.929597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.929727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.929754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.929867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.929894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.929984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.930012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.930137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.930165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.930294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.930334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.930427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.930456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.930555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.930584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.930679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.930706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 
00:35:45.377 [2024-11-18 07:21:05.930801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.930828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.930961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.930997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.931163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.931210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.931331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.931357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.931475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.931514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.931601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.931627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.931707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.931734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.931821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.931858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.931979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.932014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.932111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.932139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 
00:35:45.377 [2024-11-18 07:21:05.932223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.932252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.932366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.932393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.932531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.932558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.932647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.932675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.932770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.932799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.932913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.932940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.933054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.933082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.933202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.933230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.933346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.933373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.933485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.933521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 
00:35:45.377 [2024-11-18 07:21:05.933637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.933685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.933825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.933873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.934006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.934057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.934195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.934221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.934331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.934368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.934451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.934478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.934558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.934584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.934671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.377 [2024-11-18 07:21:05.934697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.377 qpair failed and we were unable to recover it. 00:35:45.377 [2024-11-18 07:21:05.934822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.934849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.934956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.934983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 
00:35:45.378 [2024-11-18 07:21:05.935066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.935093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.935199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.935226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.935348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.935388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.935499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.935538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.935633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.935661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.935759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.935795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.935908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.935934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.936026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.936055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.936200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.936251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.936353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.936384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 
00:35:45.378 [2024-11-18 07:21:05.936501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.936530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.936624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.936651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.936761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.936797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.936940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.936976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.937165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.937200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.937338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.937366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.937503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.937543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.937638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.937666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.937754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.937807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.937932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.937966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 
00:35:45.378 [2024-11-18 07:21:05.938100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.938134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.938271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.938305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.938420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.938448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.938554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.938583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.938660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.938686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.938821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.938868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.939008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.939060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.939238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.939276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.939383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.939419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.939554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.939582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 
00:35:45.378 [2024-11-18 07:21:05.939699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.939745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.939840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.939867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.940005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.940055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.940173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.940219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.940336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.940363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.940472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.940504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.940587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.940614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.940718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.940758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.940857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.378 [2024-11-18 07:21:05.940886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.378 qpair failed and we were unable to recover it. 00:35:45.378 [2024-11-18 07:21:05.941006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.941034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 
00:35:45.379 [2024-11-18 07:21:05.941149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.941209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.941380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.941407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.941501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.941530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.941619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.941646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.941735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.941761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.941891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.941945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.942039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.942067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.942184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.942211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.942296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.942323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.942463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.942496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 
00:35:45.379 [2024-11-18 07:21:05.942580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.942608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.942690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.942716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.942819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.942854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.942982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.943010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.943096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.943122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.943208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.943235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.943316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.943344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.943433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.943460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.943576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.943603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.943693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.943721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 
00:35:45.379 [2024-11-18 07:21:05.943847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.943884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.944031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.944059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.944173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.944200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.944282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.944309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.944422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.944449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.944591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.944640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.944752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.944801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.944882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.944910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.945024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.945051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.945165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.945193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 
00:35:45.379 [2024-11-18 07:21:05.945279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.945307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.945405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.945443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.945550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.945580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.945669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.945697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.945786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.945813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.945907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.379 [2024-11-18 07:21:05.945935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.379 qpair failed and we were unable to recover it. 00:35:45.379 [2024-11-18 07:21:05.946048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.946095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.946235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.946285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.946412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.946452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.946558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.946588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 
00:35:45.380 [2024-11-18 07:21:05.946681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.946708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.946864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.946897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.946998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.947053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.947161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.947189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.947302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.947331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.947447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.947495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.947594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.947621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.947723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.947756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.947873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.947900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.947977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.948004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 
00:35:45.380 [2024-11-18 07:21:05.948110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.948137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.948251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.948292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.948416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.948446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.948541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.948570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.948666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.948693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.948809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.948838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.948954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.948981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.949154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.949188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.949292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.949325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.949449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.949496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 
00:35:45.380 [2024-11-18 07:21:05.949591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.949618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.949736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.949782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.949938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.949984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.950070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.950097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.950204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.950253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.950335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.950363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.950473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.950507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.950621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.950660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.950757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.950807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 00:35:45.380 [2024-11-18 07:21:05.950924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.380 [2024-11-18 07:21:05.950958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.380 qpair failed and we were unable to recover it. 
00:35:45.386 [2024-11-18 07:21:05.981479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.981537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.386 qpair failed and we were unable to recover it. 00:35:45.386 [2024-11-18 07:21:05.981651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.981678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.386 qpair failed and we were unable to recover it. 00:35:45.386 [2024-11-18 07:21:05.981785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.981829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.386 qpair failed and we were unable to recover it. 00:35:45.386 [2024-11-18 07:21:05.981976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.982011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.386 qpair failed and we were unable to recover it. 00:35:45.386 [2024-11-18 07:21:05.982149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.982184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.386 qpair failed and we were unable to recover it. 00:35:45.386 [2024-11-18 07:21:05.982296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.982331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.386 qpair failed and we were unable to recover it. 00:35:45.386 [2024-11-18 07:21:05.982472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.982514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.386 qpair failed and we were unable to recover it. 00:35:45.386 [2024-11-18 07:21:05.982657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.982686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.386 qpair failed and we were unable to recover it. 00:35:45.386 [2024-11-18 07:21:05.982827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.982854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.386 qpair failed and we were unable to recover it. 00:35:45.386 [2024-11-18 07:21:05.982964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.982991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.386 qpair failed and we were unable to recover it. 
00:35:45.386 [2024-11-18 07:21:05.983140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.983193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.386 qpair failed and we were unable to recover it. 00:35:45.386 [2024-11-18 07:21:05.983277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.983305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.386 qpair failed and we were unable to recover it. 00:35:45.386 [2024-11-18 07:21:05.983425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.983464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.386 qpair failed and we were unable to recover it. 00:35:45.386 [2024-11-18 07:21:05.983603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.983632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.386 qpair failed and we were unable to recover it. 00:35:45.386 [2024-11-18 07:21:05.983752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.983781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.386 qpair failed and we were unable to recover it. 00:35:45.386 [2024-11-18 07:21:05.983909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.983958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.386 qpair failed and we were unable to recover it. 00:35:45.386 [2024-11-18 07:21:05.984112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.984154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.386 qpair failed and we were unable to recover it. 00:35:45.386 [2024-11-18 07:21:05.984358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.984394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.386 qpair failed and we were unable to recover it. 00:35:45.386 [2024-11-18 07:21:05.984564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.984593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.386 qpair failed and we were unable to recover it. 00:35:45.386 [2024-11-18 07:21:05.984710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.984738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.386 qpair failed and we were unable to recover it. 
00:35:45.386 [2024-11-18 07:21:05.984868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.984917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.386 qpair failed and we were unable to recover it. 00:35:45.386 [2024-11-18 07:21:05.985001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.985029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.386 qpair failed and we were unable to recover it. 00:35:45.386 [2024-11-18 07:21:05.985164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.985210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.386 qpair failed and we were unable to recover it. 00:35:45.386 [2024-11-18 07:21:05.985367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.386 [2024-11-18 07:21:05.985406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.985510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.985539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.985644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.985672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.985755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.985805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.986008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.986043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.986145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.986192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.986351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.986379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 
00:35:45.387 [2024-11-18 07:21:05.986516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.986557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.986704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.986732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.986825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.986853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.986968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.987018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.987141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.987168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.987271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.987307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.987402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.987432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.987557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.987585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.987702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.987730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.987847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.987883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 
00:35:45.387 [2024-11-18 07:21:05.987999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.988027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.988139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.988167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.988299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.988335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.988465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.988525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.988645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.988674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.988803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.988851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.988990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.989026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.989132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.989188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.989323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.989370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.989508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.989536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 
00:35:45.387 [2024-11-18 07:21:05.989648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.989674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.989791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.989818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.989975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.990010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.990120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.990176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.990327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.387 [2024-11-18 07:21:05.990365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.387 qpair failed and we were unable to recover it. 00:35:45.387 [2024-11-18 07:21:05.990524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.990554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.990645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.990671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.990775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.990801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.990911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.990938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.991080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.991129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 
00:35:45.388 [2024-11-18 07:21:05.991247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.991274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.991381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.991408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.991526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.991553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.991645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.991674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.991767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.991794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.991931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.991958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.992066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.992093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.992227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.992268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.992418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.992446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.992552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.992580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 
00:35:45.388 [2024-11-18 07:21:05.992677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.992704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.992790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.992847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.993007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.993057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.993187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.993236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.993366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.993405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.993555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.993585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.993727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.993755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.993905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.993957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.994111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.994147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.994259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.994298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 
00:35:45.388 [2024-11-18 07:21:05.994424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.994451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.994543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.388 [2024-11-18 07:21:05.994571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.388 qpair failed and we were unable to recover it. 00:35:45.388 [2024-11-18 07:21:05.994691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.994719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.994796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.994828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.995003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.995038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.995172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.995211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.995320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.995349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.995439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.995468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.995600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.995628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.995751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.995798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 
00:35:45.389 [2024-11-18 07:21:05.995997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.996031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.996243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.996277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.996386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.996422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.996563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.996591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.996705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.996733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.996831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.996870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.996955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.996983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.997174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.997237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.997351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.997380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.997485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.997523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 
00:35:45.389 [2024-11-18 07:21:05.997615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.997643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.997733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.997761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.997875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.997903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.998001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.998040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.998134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.998162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.998293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.998332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.998429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.998458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.998583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.998611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.998727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.998754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.998843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.998870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 
00:35:45.389 [2024-11-18 07:21:05.998952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.998984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.999111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.999139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.389 [2024-11-18 07:21:05.999221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.389 [2024-11-18 07:21:05.999258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.389 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:05.999347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:05.999380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:05.999496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:05.999524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:05.999648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:05.999675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:05.999768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:05.999795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:05.999946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:05.999981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.000078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.000113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.000220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.000263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 
00:35:45.390 [2024-11-18 07:21:06.000404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.000432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.000564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.000594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.000676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.000703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.000867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.000894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.001069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.001118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.001229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.001255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.001379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.001406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.001488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.001533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.001617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.001644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.001729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.001756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 
00:35:45.390 [2024-11-18 07:21:06.001839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.001877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.002005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.002046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.002173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.002201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.002318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.002345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.002449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.002475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.002590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.002619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.002706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.002734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.002952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.002998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.003179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.003214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.003378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.003405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 
00:35:45.390 [2024-11-18 07:21:06.003505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.003545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.003671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.003699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.003838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.003886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.004061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.004097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.004247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.004296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.004414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.390 [2024-11-18 07:21:06.004443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:45.390 qpair failed and we were unable to recover it. 00:35:45.390 [2024-11-18 07:21:06.004596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.004624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.004739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.004833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.005085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.005139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.005261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.005290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 
00:35:45.391 [2024-11-18 07:21:06.005417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.005468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.005588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.005628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.005776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.005833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.006030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.006084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.006282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.006338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.006445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.006472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.006625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.006652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.006737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.006764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.006902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.006952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.007156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.007183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 
00:35:45.391 [2024-11-18 07:21:06.007322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.007352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.007445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.007472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.007640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.007667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.007748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.007825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.008131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.008198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.008513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.008565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.008650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.008677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.008818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.008865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.009027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.009053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.009301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.009363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 
00:35:45.391 [2024-11-18 07:21:06.009452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.009497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.009609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.009635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.009725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.009752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.009894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.009939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.010073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.010127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.010237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.010263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.010375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.010402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.010518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.010551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.010628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.010655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.010765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.010791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 
00:35:45.391 [2024-11-18 07:21:06.010870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.391 [2024-11-18 07:21:06.010896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.391 qpair failed and we were unable to recover it. 00:35:45.391 [2024-11-18 07:21:06.010980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.011008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.011153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.011179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.011301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.011342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.011467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.011503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.011654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.011682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.011806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.011834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.011953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.011981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.012157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.012210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.012300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.012327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 
00:35:45.392 [2024-11-18 07:21:06.012418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.012444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.012594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.012622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.012716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.012744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.012878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.012904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.013097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.013172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.013312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.013339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.013422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.013452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.013592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.013620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.013702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.013729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.013843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.013871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 
00:35:45.392 [2024-11-18 07:21:06.013976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.014012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.014147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.014175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.014472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.014554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.014700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.014727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.014873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.014919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.015182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.015248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.015539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.015567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.015653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.015681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.015792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.015819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.015928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.015963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 
00:35:45.392 [2024-11-18 07:21:06.016125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.016161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.016339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.016405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.016642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.016670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.392 [2024-11-18 07:21:06.016783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.392 [2024-11-18 07:21:06.016815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.392 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.016904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.016930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.017177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.017213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.017404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.017432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.017544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.017576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.017690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.017722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.017806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.017863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 
00:35:45.393 [2024-11-18 07:21:06.017987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.018023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.018308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.018375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.018573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.018601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.018688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.018717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.018865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.018902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.019104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.019139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.019240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.019276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.019393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.019428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.019577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.019604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.019744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.019771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 
00:35:45.393 [2024-11-18 07:21:06.019894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.019931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.020110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.020145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.020299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.020334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.020479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.020540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.020655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.020682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.020795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.020822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.020945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.020973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.021148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.021222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.021479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.021539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.021622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.021652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 
00:35:45.393 [2024-11-18 07:21:06.021785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.021825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.022054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.393 [2024-11-18 07:21:06.022105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.393 qpair failed and we were unable to recover it. 00:35:45.393 [2024-11-18 07:21:06.022242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.022291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.022439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.022466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.022606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.022634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.022749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.022799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.022938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.022986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.023138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.023197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.023285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.023312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.023397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.023435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 
00:35:45.394 [2024-11-18 07:21:06.023543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.023583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.023688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.023716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.023839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.023867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.024014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.024041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.024150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.024177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.024293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.024323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.024473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.024509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.024627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.024661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.024828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.024880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.025131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.025201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 
00:35:45.394 [2024-11-18 07:21:06.025411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.025446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.025652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.025680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.025789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.025825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.025972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.026025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.026187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.026247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.026397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.026424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.026572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.026626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.026741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.026788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.026915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.026942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.027087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.027146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 
00:35:45.394 [2024-11-18 07:21:06.027263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.027290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.027371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.027397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.027513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.394 [2024-11-18 07:21:06.027551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.394 qpair failed and we were unable to recover it. 00:35:45.394 [2024-11-18 07:21:06.027669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.027708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.027836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.027948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.028252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.028321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.028595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.028632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.028742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.028776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.028937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.028972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.029097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.029133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 
00:35:45.395 [2024-11-18 07:21:06.029254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.029282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.029368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.029395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.029525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.029552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.029649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.029684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.029878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.029929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.030118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.030177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.030265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.030292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.030408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.030434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.030569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.030609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.030704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.030732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 
00:35:45.395 [2024-11-18 07:21:06.030881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.030907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.031063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.031090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.031277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.031312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.031455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.031623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.031759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.031849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.032150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.032216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.032484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.032541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.032649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.032683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.032837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.032882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.033125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.033190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 
00:35:45.395 [2024-11-18 07:21:06.033442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.033528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.033699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.033727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.033877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.033911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.034030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.034093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.034274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.395 [2024-11-18 07:21:06.034339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.395 qpair failed and we were unable to recover it. 00:35:45.395 [2024-11-18 07:21:06.034567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.034595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.034737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.034764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.034863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.034906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.035013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.035048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.035164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.035210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 
00:35:45.396 [2024-11-18 07:21:06.035346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.035381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.035552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.035584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.035700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.035726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.035840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.035866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.036009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.036043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.036206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.036240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.036409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.036451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.036624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.036651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.036762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.036789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.036900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.036927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 
00:35:45.396 [2024-11-18 07:21:06.037070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.037104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.037225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.037268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.037425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.037470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.037624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.037651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.037739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.037767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.037869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.037895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.038093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.038127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.038299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.038333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.038511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.038557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.038641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.038668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 
00:35:45.396 [2024-11-18 07:21:06.038778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.038809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.038897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.038965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.039199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.039264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.039440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.039466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.039586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.039612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.039730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.039776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.039931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.396 [2024-11-18 07:21:06.039957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.396 qpair failed and we were unable to recover it. 00:35:45.396 [2024-11-18 07:21:06.040097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.040131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.040323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.040392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.040560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.040587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 
00:35:45.397 [2024-11-18 07:21:06.040703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.040729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.040897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.040964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.041212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.041275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.041597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.041638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.041731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.041762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.041886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.041924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.042043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.042071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.042191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.042223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.042334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.042370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.042476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.042536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 
00:35:45.397 [2024-11-18 07:21:06.042686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.042721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.042862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.042928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.043234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.043298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.043574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.043640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.043901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.043935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.044050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.044092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.044264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.044299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.044432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.044467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.044632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.044705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.044957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.045024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 
00:35:45.397 [2024-11-18 07:21:06.045263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.045296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.045416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.045450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.045737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.045812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.046117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.046182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.046477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.046565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.046794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.046870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.047160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.047225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.047532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.047600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.047901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.047965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.048255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.048321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 
00:35:45.397 [2024-11-18 07:21:06.048587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.048653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.048962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.049036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.049388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.049422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.397 qpair failed and we were unable to recover it. 00:35:45.397 [2024-11-18 07:21:06.049611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.397 [2024-11-18 07:21:06.049647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.049757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.049792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.049967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.050002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.050211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.050275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.050576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.050642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.050935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.050978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.051114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.051148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 
00:35:45.398 [2024-11-18 07:21:06.051324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.051389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.051647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.051682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.051848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.051892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.052191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.052256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.052567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.052634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.052858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.052922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.053209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.053282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.053544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.053579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.053721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.053754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.054006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.054071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 
00:35:45.398 [2024-11-18 07:21:06.054319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.054390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.054618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.054653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.054805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.054839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.055091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.055126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.055436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.055525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.055792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.055826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.055971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.056005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.056212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.056276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.056584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.056652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.056943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.057007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 
00:35:45.398 [2024-11-18 07:21:06.057302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.057376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.057694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.057760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.058008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.058079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.058376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.058440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.058763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.058829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.059089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.059163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.059476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.059518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.398 qpair failed and we were unable to recover it. 00:35:45.398 [2024-11-18 07:21:06.059699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.398 [2024-11-18 07:21:06.059732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.059865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.059941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.060189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.060223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 
00:35:45.399 [2024-11-18 07:21:06.060397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.060431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.060691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.060757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.060968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.061044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.061304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.061373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.061707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.061772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.062077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.062142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.062402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.062467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.062757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.062791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.062957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.062991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.063242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.063307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 
00:35:45.399 [2024-11-18 07:21:06.063617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.063651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.063791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.063830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.064060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.064125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.064422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.064486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.064793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.064864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.065167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.065231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.065501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.065568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.065866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.065931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.066229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.066294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.066560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.066625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 
00:35:45.399 [2024-11-18 07:21:06.066862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.066896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.067046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.067080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.067220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.067254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.067422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.067474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.067610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.067644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.067758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.067792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.068061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.068095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.068208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.068242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.068455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.068549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.399 qpair failed and we were unable to recover it. 00:35:45.399 [2024-11-18 07:21:06.068808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.399 [2024-11-18 07:21:06.068872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 
00:35:45.400 [2024-11-18 07:21:06.069164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.069228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.069448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.069533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.069797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.069862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.070145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.070213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.070460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.070546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.070808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.070873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.071163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.071227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.071532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.071598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.071871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.071936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.072195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.072259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 
00:35:45.400 [2024-11-18 07:21:06.072515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.072582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.072869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.072935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.073185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.073250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.073537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.073603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.073911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.073979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.074232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.074301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.074518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.074583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.074873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.074938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.075212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.075246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.075384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.075420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 
00:35:45.400 [2024-11-18 07:21:06.075720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.075799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.076099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.076173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.076421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.076474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.076666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.076700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.076841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.076876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.076993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.077027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.077199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.077273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.077509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.077549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.077679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.077714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.077816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.077850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 
00:35:45.400 [2024-11-18 07:21:06.077984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.078018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.400 [2024-11-18 07:21:06.078164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.400 [2024-11-18 07:21:06.078198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.400 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.078388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.078424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.078590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.078653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.078924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.078990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.079292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.079357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.079617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.079694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.079955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.080020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.080315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.080380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.080644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.080709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 
00:35:45.401 [2024-11-18 07:21:06.081011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.081076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.081376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.081452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.081775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.081839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.082087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.082151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.082453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.082536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.082803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.082867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.083115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.083179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.083452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.083544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.083792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.083857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.084060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.084125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 
00:35:45.401 [2024-11-18 07:21:06.084379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.084414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.084586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.084621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.084875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.084940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.085208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.085242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.085387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.085422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.085700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.085770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.086022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.086096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.086388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.086454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.086760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.086824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.087124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.087158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 
00:35:45.401 [2024-11-18 07:21:06.087304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.087338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.087456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.087509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.087658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.087697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.087814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.401 [2024-11-18 07:21:06.087859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.401 qpair failed and we were unable to recover it. 00:35:45.401 [2024-11-18 07:21:06.088041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.088107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.088342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.088406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.088655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.088721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.088970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.089036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.089281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.089348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.089599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.089666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 
00:35:45.402 [2024-11-18 07:21:06.089876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.089942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.090226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.090301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.090519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.090586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.090844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.090911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.091178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.091241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.091512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.091579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.091886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.091962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.092256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.092289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.092404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.092438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.092567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.092602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 
00:35:45.402 [2024-11-18 07:21:06.092747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.092781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.093015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.093080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.093333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.093398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.093657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.093722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.094016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.094081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.094350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.094414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.094673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.094739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.094949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.095014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.095294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.095358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.095648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.095715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 
00:35:45.402 [2024-11-18 07:21:06.096012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.096078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.096375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.096439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.096756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.096790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.096936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.096971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.402 [2024-11-18 07:21:06.097273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.402 [2024-11-18 07:21:06.097348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.402 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.097654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.097720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.098014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.098072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.098332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.098398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.098700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.098767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.099012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.099077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 
00:35:45.403 [2024-11-18 07:21:06.099278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.099341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.099586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.099653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.099925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.099990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.100233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.100300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.100561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.100630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.100928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.100994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.101302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.101376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.101573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.101639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.101852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.101919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.102176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.102240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 
00:35:45.403 [2024-11-18 07:21:06.102535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.102601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.102868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.102903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.103071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.103105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.103345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.103409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.103671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.103739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.103978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.104054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.104312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.104377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.104651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.104717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.105008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.105072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.105389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.105453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 
00:35:45.403 [2024-11-18 07:21:06.105742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.105798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.105975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.106025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.106253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.106317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.106564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.106632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.106863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.106929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.107231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.107265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.107444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.107478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.107744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.107812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.108080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.403 [2024-11-18 07:21:06.108114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.403 qpair failed and we were unable to recover it. 00:35:45.403 [2024-11-18 07:21:06.108267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.108301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 
00:35:45.404 [2024-11-18 07:21:06.108438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.108529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 00:35:45.404 [2024-11-18 07:21:06.108798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.108862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 00:35:45.404 [2024-11-18 07:21:06.109107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.109197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 00:35:45.404 [2024-11-18 07:21:06.109307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.109340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 00:35:45.404 [2024-11-18 07:21:06.109485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.109526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 00:35:45.404 [2024-11-18 07:21:06.109697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.109731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 00:35:45.404 [2024-11-18 07:21:06.109929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.109962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 00:35:45.404 [2024-11-18 07:21:06.110102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.110136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 00:35:45.404 [2024-11-18 07:21:06.110296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.110360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 00:35:45.404 [2024-11-18 07:21:06.110673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.110745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 
00:35:45.404 [2024-11-18 07:21:06.110980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.111015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 00:35:45.404 [2024-11-18 07:21:06.111143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.111177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 00:35:45.404 [2024-11-18 07:21:06.111323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.111361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 00:35:45.404 [2024-11-18 07:21:06.111634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.111700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 00:35:45.404 [2024-11-18 07:21:06.111957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.112021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 00:35:45.404 [2024-11-18 07:21:06.112318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.112382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 00:35:45.404 [2024-11-18 07:21:06.112692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.112757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 00:35:45.404 [2024-11-18 07:21:06.113071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.113147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 00:35:45.404 [2024-11-18 07:21:06.113396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.113459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 00:35:45.404 [2024-11-18 07:21:06.113748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.113814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 
00:35:45.404 [2024-11-18 07:21:06.114055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.114119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 00:35:45.404 [2024-11-18 07:21:06.114423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.114516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 00:35:45.404 [2024-11-18 07:21:06.114809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.114875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 00:35:45.404 [2024-11-18 07:21:06.115129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.115192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 00:35:45.404 [2024-11-18 07:21:06.115442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.115476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.404 qpair failed and we were unable to recover it. 00:35:45.404 [2024-11-18 07:21:06.115604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.404 [2024-11-18 07:21:06.115639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.115862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.115927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.116156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.116191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.116337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.116371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.116480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.116523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 
00:35:45.405 [2024-11-18 07:21:06.116794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.116828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.117097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.117160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.117409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.117475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.117751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.117817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.118071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.118135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.118357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.118422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.118697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.118764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.119092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.119156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.119417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.119482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.119760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.119826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 
00:35:45.405 [2024-11-18 07:21:06.120106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.120170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.120469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.120554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.120810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.120875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.121176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.121251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.121558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.121625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.121879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.121946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.122248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.122322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.122584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.122650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.122891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.122956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.123246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.123309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 
00:35:45.405 [2024-11-18 07:21:06.123592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.123658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.123933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.123967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.124075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.124109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.124237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.124271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.124500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.124535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.124676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.405 [2024-11-18 07:21:06.124710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.405 qpair failed and we were unable to recover it. 00:35:45.405 [2024-11-18 07:21:06.124905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.406 [2024-11-18 07:21:06.124970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.406 qpair failed and we were unable to recover it. 00:35:45.406 [2024-11-18 07:21:06.125268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.406 [2024-11-18 07:21:06.125344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.406 qpair failed and we were unable to recover it. 00:35:45.406 [2024-11-18 07:21:06.125573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.406 [2024-11-18 07:21:06.125640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.406 qpair failed and we were unable to recover it. 00:35:45.406 [2024-11-18 07:21:06.125892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.406 [2024-11-18 07:21:06.125957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.406 qpair failed and we were unable to recover it. 
00:35:45.406 [2024-11-18 07:21:06.126254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.406 [2024-11-18 07:21:06.126317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.406 qpair failed and we were unable to recover it. 00:35:45.406 [2024-11-18 07:21:06.126628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.406 [2024-11-18 07:21:06.126694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.406 qpair failed and we were unable to recover it. 00:35:45.406 [2024-11-18 07:21:06.126906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.406 [2024-11-18 07:21:06.126971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.406 qpair failed and we were unable to recover it. 00:35:45.406 [2024-11-18 07:21:06.127240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.406 [2024-11-18 07:21:06.127303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.406 qpair failed and we were unable to recover it. 00:35:45.406 [2024-11-18 07:21:06.127551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.406 [2024-11-18 07:21:06.127617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.406 qpair failed and we were unable to recover it. 00:35:45.406 [2024-11-18 07:21:06.127916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.406 [2024-11-18 07:21:06.127991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.406 qpair failed and we were unable to recover it. 00:35:45.406 [2024-11-18 07:21:06.128289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.406 [2024-11-18 07:21:06.128323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.406 qpair failed and we were unable to recover it. 00:35:45.406 [2024-11-18 07:21:06.128470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.406 [2024-11-18 07:21:06.128521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.406 qpair failed and we were unable to recover it. 00:35:45.406 [2024-11-18 07:21:06.128693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.406 [2024-11-18 07:21:06.128727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.406 qpair failed and we were unable to recover it. 00:35:45.406 [2024-11-18 07:21:06.128910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.406 [2024-11-18 07:21:06.128973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.406 qpair failed and we were unable to recover it. 
00:35:45.406 [2024-11-18 07:21:06.129331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.406 [2024-11-18 07:21:06.129396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420
00:35:45.406 qpair failed and we were unable to recover it.
00:35:45.413 The same three-record failure sequence repeats for every subsequent connect attempt from 07:21:06.129714 through 07:21:06.193275: posix_sock_create() reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock() reports a sock connection error for tqpair=0x2170b40 at 10.0.0.2 port 4420, and each qpair fails without recovery.
00:35:45.413 [2024-11-18 07:21:06.193517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.193583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.193808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.193842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.193942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.193975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.194150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.194214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.194518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.194585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.194845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.194909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.195179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.195242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.195546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.195612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.195913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.195988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.196285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.196349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 
00:35:45.413 [2024-11-18 07:21:06.196643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.196721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.196978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.197041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.197285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.197352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.197606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.197671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.197983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.198057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.198315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.198380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.198606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.198671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.198923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.199001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.199276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.199310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.199414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.199447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 
00:35:45.413 [2024-11-18 07:21:06.199602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.199637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.199822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.199886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.200119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.200183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.200428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.200513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.200779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.200844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.201130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.201206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.201462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.201545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-18 07:21:06.201786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-18 07:21:06.201850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.202096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.202161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.202478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.202569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 
00:35:45.414 [2024-11-18 07:21:06.202831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.202895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.203126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.203190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.203486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.203581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.203874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.203939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.204234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.204299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.204554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.204621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.204873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.204937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.205233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.205308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.205601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.205666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.205976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.206047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 
00:35:45.414 [2024-11-18 07:21:06.206314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.206379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.206698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.206770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.207081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.207145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.207446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.207536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.207867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.207942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.208186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.208249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.208548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.208614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.208910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.208974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.209231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.209295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.209538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.209606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 
00:35:45.414 [2024-11-18 07:21:06.209867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.209902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.210041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.210074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.210298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.210364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.210671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.210748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.211045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.211109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.211413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.211487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.211827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.211892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.212133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.212167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.212309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.212343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 00:35:45.414 [2024-11-18 07:21:06.212477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.414 [2024-11-18 07:21:06.212523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.414 qpair failed and we were unable to recover it. 
00:35:45.414 [2024-11-18 07:21:06.212761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.212826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 00:35:45.415 [2024-11-18 07:21:06.213072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.213135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 00:35:45.415 [2024-11-18 07:21:06.213376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.213442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 00:35:45.415 [2024-11-18 07:21:06.213680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.213744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 00:35:45.415 [2024-11-18 07:21:06.214039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.214113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 00:35:45.415 [2024-11-18 07:21:06.214406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.214470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 00:35:45.415 [2024-11-18 07:21:06.214754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.214827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 00:35:45.415 [2024-11-18 07:21:06.215118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.215183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 00:35:45.415 [2024-11-18 07:21:06.215475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.215558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 00:35:45.415 [2024-11-18 07:21:06.215853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.215917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 
00:35:45.415 [2024-11-18 07:21:06.216159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.216193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 00:35:45.415 [2024-11-18 07:21:06.216306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.216345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 00:35:45.415 [2024-11-18 07:21:06.216452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.216486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 00:35:45.415 [2024-11-18 07:21:06.216631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.216666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 00:35:45.415 [2024-11-18 07:21:06.216807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.216873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 00:35:45.415 [2024-11-18 07:21:06.217100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.217163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 00:35:45.415 [2024-11-18 07:21:06.217376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.217410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 00:35:45.415 [2024-11-18 07:21:06.217549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.217585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 00:35:45.415 [2024-11-18 07:21:06.217756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.217790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 00:35:45.415 [2024-11-18 07:21:06.218027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.218091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 
00:35:45.415 [2024-11-18 07:21:06.218334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.218401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 00:35:45.415 [2024-11-18 07:21:06.218671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.218742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 00:35:45.415 [2024-11-18 07:21:06.219003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.219068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 00:35:45.415 [2024-11-18 07:21:06.219332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.219398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 00:35:45.415 [2024-11-18 07:21:06.219645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.415 [2024-11-18 07:21:06.219711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.415 qpair failed and we were unable to recover it. 00:35:45.415 [2024-11-18 07:21:06.219981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.220048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.220295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.220359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.220592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.220627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.220766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.220800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.221075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.221139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 
00:35:45.416 [2024-11-18 07:21:06.221434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.221515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.221806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.221870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.222155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.222219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.222508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.222542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.222654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.222688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.222834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.222868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.223144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.223209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.223420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.223483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.223773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.223838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.224139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.224203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 
00:35:45.416 [2024-11-18 07:21:06.224449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.224527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.224734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.224800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.225092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.225157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.225457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.225507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.225654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.225689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.225913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.225973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.226123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.226157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.226436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.226470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.226698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.226764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.227017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.227081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 
00:35:45.416 [2024-11-18 07:21:06.227381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.227455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.227776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.227841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.228143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.228228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.228525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.228560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.228701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.228736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.228876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.228909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.229178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.229242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.229543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.229609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-18 07:21:06.229852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-18 07:21:06.229917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-18 07:21:06.230220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-18 07:21:06.230295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 
00:35:45.417 [2024-11-18 07:21:06.230546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-18 07:21:06.230612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-18 07:21:06.230911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-18 07:21:06.230986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-18 07:21:06.231223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-18 07:21:06.231287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-18 07:21:06.231576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-18 07:21:06.231643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-18 07:21:06.231886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-18 07:21:06.231949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-18 07:21:06.232246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-18 07:21:06.232321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-18 07:21:06.232581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-18 07:21:06.232649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-18 07:21:06.232858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-18 07:21:06.232923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-18 07:21:06.233205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-18 07:21:06.233270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-18 07:21:06.233565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-18 07:21:06.233630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 
00:35:45.417 [2024-11-18 07:21:06.233929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.417 [2024-11-18 07:21:06.233994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420
00:35:45.417 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats back-to-back for every subsequent connect attempt, from [2024-11-18 07:21:06.233929] through [2024-11-18 07:21:06.300681], with the pipeline timestamp advancing from 00:35:45.417 to 00:35:45.702 over the run ...]
00:35:45.702 [2024-11-18 07:21:06.300932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.702 [2024-11-18 07:21:06.300997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.702 qpair failed and we were unable to recover it. 00:35:45.702 [2024-11-18 07:21:06.301200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.702 [2024-11-18 07:21:06.301265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.702 qpair failed and we were unable to recover it. 00:35:45.702 [2024-11-18 07:21:06.301518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.702 [2024-11-18 07:21:06.301592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.702 qpair failed and we were unable to recover it. 00:35:45.702 [2024-11-18 07:21:06.301873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.702 [2024-11-18 07:21:06.301938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.702 qpair failed and we were unable to recover it. 00:35:45.702 [2024-11-18 07:21:06.302236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.702 [2024-11-18 07:21:06.302301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.702 qpair failed and we were unable to recover it. 00:35:45.702 [2024-11-18 07:21:06.302591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.702 [2024-11-18 07:21:06.302656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.702 qpair failed and we were unable to recover it. 00:35:45.702 [2024-11-18 07:21:06.302950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.702 [2024-11-18 07:21:06.303015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.702 qpair failed and we were unable to recover it. 00:35:45.702 [2024-11-18 07:21:06.303238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.702 [2024-11-18 07:21:06.303305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.702 qpair failed and we were unable to recover it. 00:35:45.702 [2024-11-18 07:21:06.303555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.702 [2024-11-18 07:21:06.303630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.702 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.303932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.303997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 
00:35:45.703 [2024-11-18 07:21:06.304249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.304314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.304573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.304639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.304873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.304938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.305224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.305290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.305511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.305577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.305872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.305937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.306231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.306307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.306596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.306663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.306905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.306971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.307217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.307282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 
00:35:45.703 [2024-11-18 07:21:06.307522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.307589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.307830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.307896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.308153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.308217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.308513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.308580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.308834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.308900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.309194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.309259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.309518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.309584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.309874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.309940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.310243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.310307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.310603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.310670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 
00:35:45.703 [2024-11-18 07:21:06.310979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.311044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.311349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.311414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.311738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.311804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.312093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.312158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.312454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.312534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.312785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.312851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.313052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.313118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.313384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.313449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.313731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.313798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.314012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.314079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 
00:35:45.703 [2024-11-18 07:21:06.314321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.314387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.314653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.314721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.314956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.315021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.315315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.315390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.315707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.315774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.316022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.703 [2024-11-18 07:21:06.316089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.703 qpair failed and we were unable to recover it. 00:35:45.703 [2024-11-18 07:21:06.316344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.316410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.316726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.316792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.317091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.317156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.317454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.317539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 
00:35:45.704 [2024-11-18 07:21:06.317787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.317853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.318070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.318135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.318434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.318525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.318841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.318907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.319202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.319269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.319565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.319632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.319881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.319946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.320132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.320197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.320415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.320481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.320783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.320849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 
00:35:45.704 [2024-11-18 07:21:06.321141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.321206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.321450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.321532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.321783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.321850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.322141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.322206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.322541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.322609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.322868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.322934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.323223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.323288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.323593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.323659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.323899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.323965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.324231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.324296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 
00:35:45.704 [2024-11-18 07:21:06.324581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.324646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.324945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.325011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.325259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.325324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.325539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.325606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.325867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.325933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.326176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.326242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.326518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.326584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.326817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.326883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.327152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.327218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.327516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.327583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 
00:35:45.704 [2024-11-18 07:21:06.327836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.327902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.328148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.328214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.328455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.328537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.328826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.704 [2024-11-18 07:21:06.328891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.704 qpair failed and we were unable to recover it. 00:35:45.704 [2024-11-18 07:21:06.329150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.329218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.329465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.329547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.329790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.329855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.330104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.330169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.330467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.330560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.330850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.330916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 
00:35:45.705 [2024-11-18 07:21:06.331213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.331277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.331537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.331604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.331813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.331878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.332122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.332186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.332408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.332473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.332784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.332850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.333067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.333132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.333350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.333416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.333695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.333763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.333979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.334045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 
00:35:45.705 [2024-11-18 07:21:06.334292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.334358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.334599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.334668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.334955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.335020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.335313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.335379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.335650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.335717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.335974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.336040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.336325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.336391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.336664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.336730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.336996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.337061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.337353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.337418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 
00:35:45.705 [2024-11-18 07:21:06.337687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.337752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.337999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.338075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.338334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.338399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.338707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.338773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.339068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.339134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.339431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-18 07:21:06.339513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.705 qpair failed and we were unable to recover it. 00:35:45.705 [2024-11-18 07:21:06.339727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.706 [2024-11-18 07:21:06.339793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.706 qpair failed and we were unable to recover it. 00:35:45.706 [2024-11-18 07:21:06.340082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.706 [2024-11-18 07:21:06.340147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.706 qpair failed and we were unable to recover it. 00:35:45.706 [2024-11-18 07:21:06.340455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.706 [2024-11-18 07:21:06.340537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.706 qpair failed and we were unable to recover it. 00:35:45.706 [2024-11-18 07:21:06.340771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.706 [2024-11-18 07:21:06.340836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.706 qpair failed and we were unable to recover it. 
00:35:45.706 [2024-11-18 07:21:06.341089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.706 [2024-11-18 07:21:06.341154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.706 qpair failed and we were unable to recover it. 00:35:45.706 [2024-11-18 07:21:06.341358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.706 [2024-11-18 07:21:06.341424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.706 qpair failed and we were unable to recover it. 00:35:45.706 [2024-11-18 07:21:06.341730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.706 [2024-11-18 07:21:06.341796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.706 qpair failed and we were unable to recover it. 00:35:45.706 [2024-11-18 07:21:06.342049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.706 [2024-11-18 07:21:06.342114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.706 qpair failed and we were unable to recover it. 00:35:45.706 [2024-11-18 07:21:06.342371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.706 [2024-11-18 07:21:06.342437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.706 qpair failed and we were unable to recover it. 00:35:45.706 [2024-11-18 07:21:06.342780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.706 [2024-11-18 07:21:06.342846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.706 qpair failed and we were unable to recover it. 00:35:45.706 [2024-11-18 07:21:06.343129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.706 [2024-11-18 07:21:06.343194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.706 qpair failed and we were unable to recover it. 00:35:45.706 [2024-11-18 07:21:06.343438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.706 [2024-11-18 07:21:06.343521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.706 qpair failed and we were unable to recover it. 00:35:45.706 [2024-11-18 07:21:06.343813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.706 [2024-11-18 07:21:06.343879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.706 qpair failed and we were unable to recover it. 00:35:45.706 [2024-11-18 07:21:06.344133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.706 [2024-11-18 07:21:06.344199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.706 qpair failed and we were unable to recover it. 
00:35:45.706 [2024-11-18 07:21:06.344458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.706 [2024-11-18 07:21:06.344551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420
00:35:45.706 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats for every subsequent reconnect attempt in this window ...]
00:35:45.712 [2024-11-18 07:21:06.414770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.712 [2024-11-18 07:21:06.414836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420
00:35:45.712 qpair failed and we were unable to recover it.
00:35:45.712 [2024-11-18 07:21:06.415129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.415195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.415440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.415521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.415833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.415899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.416149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.416216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.416465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.416547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.416848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.416914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.417211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.417277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.417470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.417553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.417771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.417837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.418136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.418201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 
00:35:45.712 [2024-11-18 07:21:06.418517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.418585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.418852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.418916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.419210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.419276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.419510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.419579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.419846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.419912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.420194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.420269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.420571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.420637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.420884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.420949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.421186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.421251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.421546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.421612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 
00:35:45.712 [2024-11-18 07:21:06.421900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.421965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.422257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.422323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.422611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.422676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.422936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.423000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.423287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.423354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.423659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.423725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.423974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.424040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.424301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.424366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.424655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.424721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.424946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.425012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 
00:35:45.712 [2024-11-18 07:21:06.425296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.425363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.425650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.425717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.426019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.426085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.426328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.712 [2024-11-18 07:21:06.426392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.712 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-18 07:21:06.426695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.426762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.427071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.427136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.427443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.427526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.427820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.427886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.428154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.428219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.428446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.428528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 
00:35:45.713 [2024-11-18 07:21:06.428819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.428884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.429143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.429207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.429523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.429589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.429900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.429966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.430223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.430288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.430587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.430653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.430916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.430982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.431216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.431280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.431532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.431600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.431905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.431970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 
00:35:45.713 [2024-11-18 07:21:06.432268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.432332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.432633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.432699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.432907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.432972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.433264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.433329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.433587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.433654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.433950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.434013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.434300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.434366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.434613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.434681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.434972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.435037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.435279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.435344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 
00:35:45.713 [2024-11-18 07:21:06.435651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.435717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.436029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.436093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.436374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.436439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.436724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.436799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.437084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.437150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.437406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.437471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.437705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.437738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.437904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.437937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.438046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.438080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.438216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.438248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 
00:35:45.713 [2024-11-18 07:21:06.438393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.438427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.438564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-18 07:21:06.438598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-18 07:21:06.438729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.438773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.438908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.438942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.439072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.439105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.439357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.439431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.439653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.439686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.439833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.439907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.440202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.440267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.440518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.440551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 
00:35:45.714 [2024-11-18 07:21:06.440693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.440726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.441000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.441063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.441355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.441388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.441651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.441690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.441856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.441923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.442215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.442278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.442573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.442607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.442758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.442830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.443064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.443098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.443199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.443253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 
00:35:45.714 [2024-11-18 07:21:06.443538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.443571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.443711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.443743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.443879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.443910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.444131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.444197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.444545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.444578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.444678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.444711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.444901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.444968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.445213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.445279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.445555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.445589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.445720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.445754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 
00:35:45.714 [2024-11-18 07:21:06.446004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.446038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.446223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.446288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.446561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.446596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.446759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.446804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.447048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.447112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.447303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.447368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.447590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.447624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.447756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.447827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.448075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-18 07:21:06.448141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-18 07:21:06.448426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-18 07:21:06.448504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 
00:35:45.715 [2024-11-18 07:21:06.448661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-18 07:21:06.448700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-18 07:21:06.448895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-18 07:21:06.448961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-18 07:21:06.449183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-18 07:21:06.449247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-18 07:21:06.449543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-18 07:21:06.449577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-18 07:21:06.449717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-18 07:21:06.449750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-18 07:21:06.449990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-18 07:21:06.450023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-18 07:21:06.450125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-18 07:21:06.450179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-18 07:21:06.450422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-18 07:21:06.450487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-18 07:21:06.450649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-18 07:21:06.450682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-18 07:21:06.450853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-18 07:21:06.450931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 
00:35:45.715 [2024-11-18 07:21:06.451226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-18 07:21:06.451292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-18 07:21:06.451484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-18 07:21:06.451527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-18 07:21:06.451634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-18 07:21:06.451667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-18 07:21:06.451788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-18 07:21:06.451827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-18 07:21:06.452072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-18 07:21:06.452137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-18 07:21:06.452426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-18 07:21:06.452548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-18 07:21:06.452730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-18 07:21:06.452763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-18 07:21:06.453080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-18 07:21:06.453146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-18 07:21:06.453411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-18 07:21:06.453475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-18 07:21:06.453677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-18 07:21:06.453710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 
00:35:45.715 [2024-11-18 07:21:06.453899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.715 [2024-11-18 07:21:06.453966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420
00:35:45.715 qpair failed and we were unable to recover it.
[... the same three-line record (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 210 times, with only the microsecond timestamps changing, between 07:21:06.453 and 07:21:06.523 ...]
00:35:45.721 [2024-11-18 07:21:06.523837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.721 [2024-11-18 07:21:06.523904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420
00:35:45.721 qpair failed and we were unable to recover it.
00:35:45.721 [2024-11-18 07:21:06.524162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.721 [2024-11-18 07:21:06.524228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.721 qpair failed and we were unable to recover it. 00:35:45.721 [2024-11-18 07:21:06.524483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.721 [2024-11-18 07:21:06.524565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.721 qpair failed and we were unable to recover it. 00:35:45.721 [2024-11-18 07:21:06.524801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.721 [2024-11-18 07:21:06.524866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.721 qpair failed and we were unable to recover it. 00:35:45.721 [2024-11-18 07:21:06.525047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.721 [2024-11-18 07:21:06.525115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.721 qpair failed and we were unable to recover it. 00:35:45.721 [2024-11-18 07:21:06.525410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.721 [2024-11-18 07:21:06.525474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.721 qpair failed and we were unable to recover it. 00:35:45.721 [2024-11-18 07:21:06.525755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.721 [2024-11-18 07:21:06.525821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.721 qpair failed and we were unable to recover it. 00:35:45.721 [2024-11-18 07:21:06.526102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.721 [2024-11-18 07:21:06.526168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.721 qpair failed and we were unable to recover it. 00:35:45.721 [2024-11-18 07:21:06.526462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.721 [2024-11-18 07:21:06.526544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.721 qpair failed and we were unable to recover it. 00:35:45.721 [2024-11-18 07:21:06.526800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.721 [2024-11-18 07:21:06.526865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.721 qpair failed and we were unable to recover it. 00:35:45.721 [2024-11-18 07:21:06.527112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.721 [2024-11-18 07:21:06.527177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.721 qpair failed and we were unable to recover it. 
00:35:45.721 [2024-11-18 07:21:06.527468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.721 [2024-11-18 07:21:06.527551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.721 qpair failed and we were unable to recover it. 00:35:45.721 [2024-11-18 07:21:06.527845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.721 [2024-11-18 07:21:06.527911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.721 qpair failed and we were unable to recover it. 00:35:45.721 [2024-11-18 07:21:06.528174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.721 [2024-11-18 07:21:06.528241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.721 qpair failed and we were unable to recover it. 00:35:45.721 [2024-11-18 07:21:06.528461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.721 [2024-11-18 07:21:06.528560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.721 qpair failed and we were unable to recover it. 00:35:45.721 [2024-11-18 07:21:06.528815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.721 [2024-11-18 07:21:06.528892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.721 qpair failed and we were unable to recover it. 00:35:45.721 [2024-11-18 07:21:06.529183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.721 [2024-11-18 07:21:06.529248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.721 qpair failed and we were unable to recover it. 00:35:45.721 [2024-11-18 07:21:06.529454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.721 [2024-11-18 07:21:06.529538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.721 qpair failed and we were unable to recover it. 00:35:45.721 [2024-11-18 07:21:06.529751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.721 [2024-11-18 07:21:06.529818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.721 qpair failed and we were unable to recover it. 00:35:45.721 [2024-11-18 07:21:06.530113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.721 [2024-11-18 07:21:06.530179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.721 qpair failed and we were unable to recover it. 00:35:45.721 [2024-11-18 07:21:06.530433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.721 [2024-11-18 07:21:06.530513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.721 qpair failed and we were unable to recover it. 
00:35:45.721 [2024-11-18 07:21:06.530819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.721 [2024-11-18 07:21:06.530885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.721 qpair failed and we were unable to recover it. 00:35:45.721 [2024-11-18 07:21:06.531127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.721 [2024-11-18 07:21:06.531193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.721 qpair failed and we were unable to recover it. 00:35:45.721 [2024-11-18 07:21:06.531429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.721 [2024-11-18 07:21:06.531511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.721 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.531765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.531832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.532042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.532108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.532358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.532423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.532696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.532762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.532973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.533039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.533259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.533324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.533579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.533646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 
00:35:45.722 [2024-11-18 07:21:06.533908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.533974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.534261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.534326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.534546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.534613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.534875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.534941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.535229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.535294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.535483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.535560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.535802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.535868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.536060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.536126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.536413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.536479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.536789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.536854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 
00:35:45.722 [2024-11-18 07:21:06.537146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.537211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.537523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.537590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.537860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.537926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.538173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.538238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.538542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.538609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.538909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.538974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.539190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.539254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.539530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.539603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.539900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.539965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.540215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.540281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 
00:35:45.722 [2024-11-18 07:21:06.540579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.540645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.540901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.540967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.541259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.541324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.541536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.541603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.541830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.541895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.542148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.542214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.542520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.542587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.542877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.542941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.543154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.543220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.722 [2024-11-18 07:21:06.543524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.543592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 
00:35:45.722 [2024-11-18 07:21:06.543883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.722 [2024-11-18 07:21:06.543948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.722 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.544239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.544305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.544608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.544675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.544948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.545013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.545206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.545273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.545522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.545590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.545857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.545923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.546191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.546255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.546558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.546624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.546943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.547008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 
00:35:45.723 [2024-11-18 07:21:06.547280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.547344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.547638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.547706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.547951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.548016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.548243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.548307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.548517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.548584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.548880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.548946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.549205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.549269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.549517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.549584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.549837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.549904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.550104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.550168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 
00:35:45.723 [2024-11-18 07:21:06.550416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.550482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.550818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.550884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.551108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.551183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.551438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.551519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.551815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.551881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.552174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.552238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.552427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.552527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.552782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.552848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.553092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.553157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.553400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.553465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 
00:35:45.723 [2024-11-18 07:21:06.553749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.553813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.554113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.554179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.554469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.554553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.554799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.554863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.555151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.555217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.555454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.555538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.555845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.555910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.556099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.556165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-18 07:21:06.556429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-18 07:21:06.556511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.556813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.556878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 
00:35:45.724 [2024-11-18 07:21:06.557170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.557235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.557443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.557525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.557814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.557878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.558187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.558252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.558520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.558587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.558831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.558896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.559098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.559165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.559465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.559547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.559849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.559915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.560120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.560198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 
00:35:45.724 [2024-11-18 07:21:06.560457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.560557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.560851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.560916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.561166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.561230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.561476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.561559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.561858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.561923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.562122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.562188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.562482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.562565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.562849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.562915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.563179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.563244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.563515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.563580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 
00:35:45.724 [2024-11-18 07:21:06.563791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.563857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.564141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.564206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.564513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.564579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.564847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.564913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.565164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.565230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.565551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.565618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.565873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.565939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.566189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.566254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.566555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.566621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-18 07:21:06.566876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-18 07:21:06.566941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 
00:35:45.724 [2024-11-18 07:21:06.567178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.724 [2024-11-18 07:21:06.567242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420
00:35:45.724 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111, i.e. ECONNREFUSED, against 10.0.0.2 port 4420 on tqpair=0x2170b40, each attempt ending with "qpair failed and we were unable to recover it") repeats for roughly 210 consecutive connection attempts between 07:21:06.567 and 07:21:06.636 ...]
00:35:45.730 [2024-11-18 07:21:06.635879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.730 [2024-11-18 07:21:06.635945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420
00:35:45.730 qpair failed and we were unable to recover it.
00:35:45.730 [2024-11-18 07:21:06.636213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.730 [2024-11-18 07:21:06.636278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.730 qpair failed and we were unable to recover it. 00:35:45.730 [2024-11-18 07:21:06.636579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.730 [2024-11-18 07:21:06.636645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.730 qpair failed and we were unable to recover it. 00:35:45.730 [2024-11-18 07:21:06.636941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.730 [2024-11-18 07:21:06.637007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.730 qpair failed and we were unable to recover it. 00:35:45.730 [2024-11-18 07:21:06.637318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.730 [2024-11-18 07:21:06.637383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.730 qpair failed and we were unable to recover it. 00:35:45.730 [2024-11-18 07:21:06.637652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.730 [2024-11-18 07:21:06.637718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.730 qpair failed and we were unable to recover it. 00:35:45.730 [2024-11-18 07:21:06.638027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.730 [2024-11-18 07:21:06.638092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.730 qpair failed and we were unable to recover it. 00:35:45.730 [2024-11-18 07:21:06.638342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.730 [2024-11-18 07:21:06.638407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.730 qpair failed and we were unable to recover it. 00:35:45.730 [2024-11-18 07:21:06.638715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.730 [2024-11-18 07:21:06.638781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.730 qpair failed and we were unable to recover it. 00:35:45.730 [2024-11-18 07:21:06.639042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.730 [2024-11-18 07:21:06.639107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.730 qpair failed and we were unable to recover it. 00:35:45.730 [2024-11-18 07:21:06.639320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.730 [2024-11-18 07:21:06.639388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.730 qpair failed and we were unable to recover it. 
00:35:45.730 [2024-11-18 07:21:06.639646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.730 [2024-11-18 07:21:06.639713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.730 qpair failed and we were unable to recover it. 00:35:45.730 [2024-11-18 07:21:06.639961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.730 [2024-11-18 07:21:06.640028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.730 qpair failed and we were unable to recover it. 00:35:45.730 [2024-11-18 07:21:06.640309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.730 [2024-11-18 07:21:06.640375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.730 qpair failed and we were unable to recover it. 00:35:45.730 [2024-11-18 07:21:06.640628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.730 [2024-11-18 07:21:06.640694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.730 qpair failed and we were unable to recover it. 00:35:45.730 [2024-11-18 07:21:06.640957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.730 [2024-11-18 07:21:06.641023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.730 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.641259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.641324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.641571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.641647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.641910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.641976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.642227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.642291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.642548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.642614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 
00:35:45.731 [2024-11-18 07:21:06.642864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.642929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.643224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.643289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.643604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.643670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.643974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.644040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.644338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.644403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.644668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.644734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.645039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.645105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.645364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.645430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.645743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.645809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.646109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.646174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 
00:35:45.731 [2024-11-18 07:21:06.646368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.646433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.646717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.646784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.647029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.647094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.647353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.647419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.647704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.647770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.648071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.648137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.648397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.648463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.648732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.648798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.649039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.649104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.649352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.649417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 
00:35:45.731 [2024-11-18 07:21:06.649719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.649787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.650090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.650155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.650459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.650555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.650753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.650819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.651040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.651106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.651360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.651426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.651714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.651780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.652067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.652133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.652385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.652450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.652726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.652793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 
00:35:45.731 [2024-11-18 07:21:06.653001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.653067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.653312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.653378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-11-18 07:21:06.653618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-11-18 07:21:06.653683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.732 [2024-11-18 07:21:06.653973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-11-18 07:21:06.654037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-11-18 07:21:06.654329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-11-18 07:21:06.654405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-11-18 07:21:06.654634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-11-18 07:21:06.654701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-11-18 07:21:06.654988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-11-18 07:21:06.655054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-11-18 07:21:06.655311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-11-18 07:21:06.655377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-11-18 07:21:06.655643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-11-18 07:21:06.655711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-11-18 07:21:06.655963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-11-18 07:21:06.656028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 
00:35:45.732 [2024-11-18 07:21:06.656269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-11-18 07:21:06.656333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-11-18 07:21:06.656629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-11-18 07:21:06.656696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-11-18 07:21:06.656948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-11-18 07:21:06.657015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-11-18 07:21:06.657300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-11-18 07:21:06.657365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-11-18 07:21:06.657655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-11-18 07:21:06.657722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-11-18 07:21:06.657939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-11-18 07:21:06.658005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-11-18 07:21:06.658258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-11-18 07:21:06.658323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-11-18 07:21:06.658535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-11-18 07:21:06.658602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-11-18 07:21:06.658914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-11-18 07:21:06.658981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-11-18 07:21:06.659273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-11-18 07:21:06.659338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 
00:35:45.732 [2024-11-18 07:21:06.659637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-11-18 07:21:06.659704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-11-18 07:21:06.660008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-11-18 07:21:06.660073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-11-18 07:21:06.660341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-11-18 07:21:06.660405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-11-18 07:21:06.660688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.010 [2024-11-18 07:21:06.660754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.010 qpair failed and we were unable to recover it. 00:35:46.010 [2024-11-18 07:21:06.661004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.010 [2024-11-18 07:21:06.661072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.010 qpair failed and we were unable to recover it. 00:35:46.010 [2024-11-18 07:21:06.661314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.010 [2024-11-18 07:21:06.661379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.010 qpair failed and we were unable to recover it. 00:35:46.010 [2024-11-18 07:21:06.661622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.010 [2024-11-18 07:21:06.661689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.010 qpair failed and we were unable to recover it. 00:35:46.010 [2024-11-18 07:21:06.661944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.010 [2024-11-18 07:21:06.662010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.010 qpair failed and we were unable to recover it. 00:35:46.010 [2024-11-18 07:21:06.662218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.010 [2024-11-18 07:21:06.662283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.010 qpair failed and we were unable to recover it. 00:35:46.010 [2024-11-18 07:21:06.662539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.010 [2024-11-18 07:21:06.662606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.010 qpair failed and we were unable to recover it. 
00:35:46.010 [2024-11-18 07:21:06.662891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.010 [2024-11-18 07:21:06.662957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.010 qpair failed and we were unable to recover it. 00:35:46.010 [2024-11-18 07:21:06.663160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.010 [2024-11-18 07:21:06.663238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.010 qpair failed and we were unable to recover it. 00:35:46.010 [2024-11-18 07:21:06.663457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.010 [2024-11-18 07:21:06.663538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.010 qpair failed and we were unable to recover it. 00:35:46.010 [2024-11-18 07:21:06.663758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.010 [2024-11-18 07:21:06.663822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.010 qpair failed and we were unable to recover it. 00:35:46.010 [2024-11-18 07:21:06.663988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.010 [2024-11-18 07:21:06.664054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.010 qpair failed and we were unable to recover it. 00:35:46.010 [2024-11-18 07:21:06.664250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.010 [2024-11-18 07:21:06.664315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.010 qpair failed and we were unable to recover it. 00:35:46.010 [2024-11-18 07:21:06.664530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.010 [2024-11-18 07:21:06.664594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.010 qpair failed and we were unable to recover it. 00:35:46.010 [2024-11-18 07:21:06.664834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.010 [2024-11-18 07:21:06.664898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.010 qpair failed and we were unable to recover it. 00:35:46.010 [2024-11-18 07:21:06.665140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.010 [2024-11-18 07:21:06.665205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.010 qpair failed and we were unable to recover it. 00:35:46.010 [2024-11-18 07:21:06.665408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.010 [2024-11-18 07:21:06.665472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.010 qpair failed and we were unable to recover it. 
00:35:46.010 [2024-11-18 07:21:06.665774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.010 [2024-11-18 07:21:06.665840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.010 qpair failed and we were unable to recover it. 00:35:46.010 [2024-11-18 07:21:06.666078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.010 [2024-11-18 07:21:06.666145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.010 qpair failed and we were unable to recover it. 00:35:46.010 [2024-11-18 07:21:06.666383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.010 [2024-11-18 07:21:06.666448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.010 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.666701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.666775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.667021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.667087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.667397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.667461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.667783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.667848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.668052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.668119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.668412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.668477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.668804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.668869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 
00:35:46.011 [2024-11-18 07:21:06.669126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.669191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.669448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.669531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.669742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.669809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.669994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.670060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.670321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.670387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.670659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.670725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.670985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.671049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.671304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.671368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.671639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.671717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.671960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.672026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 
00:35:46.011 [2024-11-18 07:21:06.672312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.672378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.672599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.672666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.672924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.672988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.673283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.673349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.673652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.673719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.673929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.673994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.674283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.674348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.674652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.674720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.675018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.675082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.675382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.675447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 
00:35:46.011 [2024-11-18 07:21:06.675750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.675817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.676064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.676128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.676429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.676512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.676820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.676887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.677181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.677246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.677456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.677537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.677832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.677897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.678166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.678230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.011 qpair failed and we were unable to recover it. 00:35:46.011 [2024-11-18 07:21:06.678478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.011 [2024-11-18 07:21:06.678568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.678873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.678937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 
00:35:46.012 [2024-11-18 07:21:06.679184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.679250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.679547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.679613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.679900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.679965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.680170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.680235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.680534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.680600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.680894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.680958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.681265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.681331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.681628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.681694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.681982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.682046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.682305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.682372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 
00:35:46.012 [2024-11-18 07:21:06.682612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.682680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.682897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.682960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.683249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.683312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.683573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.683641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.683939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.684003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.684227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.684292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.684591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.684657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.684912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.684979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.685271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.685336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.685647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.685713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 
00:35:46.012 [2024-11-18 07:21:06.686001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.686066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.686353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.686417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.686728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.686796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.687097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.687160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.687459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.687541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.687848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.687913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.688167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.688231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.688528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.688594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.688793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.688860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.689146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.689210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 
00:35:46.012 [2024-11-18 07:21:06.689517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.689594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.689901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.689966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.690215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.690279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.690541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.012 [2024-11-18 07:21:06.690609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.012 qpair failed and we were unable to recover it. 00:35:46.012 [2024-11-18 07:21:06.690907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.690974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.691264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.691328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.691538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.691604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.691901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.691966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.692210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.692277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.692476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.692557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 
00:35:46.013 [2024-11-18 07:21:06.692846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.692909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.693165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.693233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.693532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.693598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.693842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.693916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.694169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.694234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.694525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.694592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.694836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.694913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.695191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.695256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.695556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.695621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.695916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.695981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 
00:35:46.013 [2024-11-18 07:21:06.696278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.696343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.696639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.696704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.696972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.697036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.697327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.697392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.697647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.697726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.697980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.698047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.698314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.698378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.698700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.698766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.699025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.699089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.699349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.699413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 
00:35:46.013 [2024-11-18 07:21:06.699745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.699811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.700020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.700084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.700326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.700392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.700658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.700725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.700976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.701042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.701248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.701314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.701568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.701634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.701848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.701915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.702211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.702276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.013 qpair failed and we were unable to recover it. 00:35:46.013 [2024-11-18 07:21:06.702563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.013 [2024-11-18 07:21:06.702629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 
00:35:46.014 [2024-11-18 07:21:06.702886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.702951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.703238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.703303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.703608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.703673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.703963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.704039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.704336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.704402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.704709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.704774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.705025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.705089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.705383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.705448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.705769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.705833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.706122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.706186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 
00:35:46.014 [2024-11-18 07:21:06.706423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.706488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.706752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.706816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.707074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.707139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.707425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.707507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.707801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.707866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.708159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.708224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.708524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.708591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.708892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.708958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.709207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.709272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.709488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.709583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 
00:35:46.014 [2024-11-18 07:21:06.709842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.709906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.710204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.710268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.710521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.710588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.710879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.710944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.711227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.711291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.711501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.711568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.711749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.711815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.712039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.712103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.712356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.712420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.712687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.712753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 
00:35:46.014 [2024-11-18 07:21:06.713052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.713116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.713376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.713441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.713758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.713824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.714070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.714135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.014 [2024-11-18 07:21:06.714401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.014 [2024-11-18 07:21:06.714467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.014 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.714776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.714841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.715139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.715204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.715521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.715587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.715879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.715943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.716247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.716312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 
00:35:46.015 [2024-11-18 07:21:06.716560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.716626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.716888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.716952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.717236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.717300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.717601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.717668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.717965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.718030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.718321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.718385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.718668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.718734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.718948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.719012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.719253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.719318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.719570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.719636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 
00:35:46.015 [2024-11-18 07:21:06.719890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.719954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.720181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.720246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.720485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.720574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.720826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.720894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.721150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.721215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.721486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.721581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.721856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.721921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.722170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.722236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.722448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.722533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.722804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.722868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 
00:35:46.015 [2024-11-18 07:21:06.723164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.723229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.723531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.723598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.723849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.723916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.015 qpair failed and we were unable to recover it. 00:35:46.015 [2024-11-18 07:21:06.724171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.015 [2024-11-18 07:21:06.724236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.724553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.724620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.724823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.724889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.725152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.725218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.725517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.725593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.725892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.725957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.726228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.726294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 
00:35:46.016 [2024-11-18 07:21:06.726591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.726657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.726945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.727020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.727274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.727339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.727601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.727667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.727950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.728014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.728310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.728375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.728628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.728694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.728962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.729026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.729295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.729360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.729634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.729700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 
00:35:46.016 [2024-11-18 07:21:06.729964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.730030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.730330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.730394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.730685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.730751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.731001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.731066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.731361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.731426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.731752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.731818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.732066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.732131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.732417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.732482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.732765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.732831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.733131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.733196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 
00:35:46.016 [2024-11-18 07:21:06.733437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.733518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.733826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.733888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.734177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.734238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.734557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.734629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.734839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.734905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.735147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.735212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.735513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.735580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.735840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.735906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.016 [2024-11-18 07:21:06.736153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.016 [2024-11-18 07:21:06.736230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.016 qpair failed and we were unable to recover it. 00:35:46.017 [2024-11-18 07:21:06.736516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.017 [2024-11-18 07:21:06.736584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.017 qpair failed and we were unable to recover it. 
00:35:46.017 [2024-11-18 07:21:06.736840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.017 [2024-11-18 07:21:06.736905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420
00:35:46.017 qpair failed and we were unable to recover it.
00:35:46.017 [2024-11-18 07:21:06.737147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.017 [2024-11-18 07:21:06.737212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420
00:35:46.017 qpair failed and we were unable to recover it.
[... the same three-line error pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt logged between 07:21:06.737 and 07:21:06.799 ...]
00:35:46.023 [2024-11-18 07:21:06.799150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-11-18 07:21:06.799185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-11-18 07:21:06.799328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.799363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.799509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.799546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.799684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.799720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.799817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.799849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.799962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.799995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.800133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.800173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.800322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.800357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.800471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.800515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.800652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.800687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.800837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.800872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 
00:35:46.023 [2024-11-18 07:21:06.801016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.801052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.801185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.801221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.801334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.801367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.801535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.801570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.801678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.801714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.801826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.801858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.802038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.802074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.802244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.802279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.802422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.802457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.802617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.802652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 
00:35:46.023 [2024-11-18 07:21:06.802754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.802787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.802916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.802951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.803123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.803158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.803331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.803365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.803522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.803556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.803695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.803731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.803878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.803914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.023 qpair failed and we were unable to recover it. 00:35:46.023 [2024-11-18 07:21:06.804084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.023 [2024-11-18 07:21:06.804119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.804270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.804305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.804416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.804450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 
00:35:46.024 [2024-11-18 07:21:06.804606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.804641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.804810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.804852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.804981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.805021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.805189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.805224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.805462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.805553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.805760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.805816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.805977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.806033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.806262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.806318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.806530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.806589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.806750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.806807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 
00:35:46.024 [2024-11-18 07:21:06.807041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.807097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.807345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.807401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.807599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.807658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.807871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.807926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.808188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.808244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.808552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.808610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.808846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.808904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.809159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.809216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.809517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.809574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.809874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.809930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 
00:35:46.024 [2024-11-18 07:21:06.810176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.810233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.810449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.810516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.810741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.810798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.811008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.811065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.811243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.811299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.811534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.811592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.811804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.811861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.812142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.812199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.812453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.812527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.812707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.812762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 
00:35:46.024 [2024-11-18 07:21:06.813032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.813089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.024 [2024-11-18 07:21:06.813390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.024 [2024-11-18 07:21:06.813446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.024 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.813652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.813708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.813921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.813979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.814173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.814229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.814449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.814523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.814745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.814802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.815019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.815077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.815292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.815350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.815614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.815672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 
00:35:46.025 [2024-11-18 07:21:06.815884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.815941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.816122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.816180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.816377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.816433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.816825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.816929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.817171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.817232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.817420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.817480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.817718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.817778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.818030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.818099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.818336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.818402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.818705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.818763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 
00:35:46.025 [2024-11-18 07:21:06.818982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.819040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.819265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.819333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.819594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.819658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.819850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.819917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.820195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.820253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.820473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.820548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.820822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.820888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.821161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.821231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.821485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.821554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 00:35:46.025 [2024-11-18 07:21:06.821751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.025 [2024-11-18 07:21:06.821808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.025 qpair failed and we were unable to recover it. 
00:35:46.026 [2024-11-18 07:21:06.822062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.822130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.822377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.822437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.822632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.822691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.822924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.822991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.823315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.823393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.823582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.823642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.823813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.823870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.824120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.824182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.824443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.824548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.824835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.824893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 
00:35:46.026 [2024-11-18 07:21:06.825074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.825133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.825313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.825394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.825713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.825776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.826065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.826142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.826326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.826385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.826584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.826644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.826890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.826952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.827254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.827322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.827600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.827660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.827912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.827969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 
00:35:46.026 [2024-11-18 07:21:06.828273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.828330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.828585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.828644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.828875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.828938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.829255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.829349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.829649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.829713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.830013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.830080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.830372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.830438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.830755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.830823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.831084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.831151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.831399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.831465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 
00:35:46.026 [2024-11-18 07:21:06.831747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.831815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.832069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.832135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.832404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.832462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.832705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.832763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-11-18 07:21:06.832993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-11-18 07:21:06.833059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.027 [2024-11-18 07:21:06.833305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-11-18 07:21:06.833372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-11-18 07:21:06.833633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-11-18 07:21:06.833701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-11-18 07:21:06.834000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-11-18 07:21:06.834067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-11-18 07:21:06.834320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-11-18 07:21:06.834388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-11-18 07:21:06.834622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-11-18 07:21:06.834690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 
00:35:46.027 [2024-11-18 07:21:06.834992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-11-18 07:21:06.835059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-11-18 07:21:06.835305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-11-18 07:21:06.835374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-11-18 07:21:06.835642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-11-18 07:21:06.835712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-11-18 07:21:06.836011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-11-18 07:21:06.836079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-11-18 07:21:06.836327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-11-18 07:21:06.836398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-11-18 07:21:06.836710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-11-18 07:21:06.836778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-11-18 07:21:06.837089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-11-18 07:21:06.837156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-11-18 07:21:06.837447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-11-18 07:21:06.837532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-11-18 07:21:06.837796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-11-18 07:21:06.837862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-11-18 07:21:06.838115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-11-18 07:21:06.838184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every remaining reconnect attempt logged between 07:21:06.838 and 07:21:06.905 ...]
00:35:46.033 [2024-11-18 07:21:06.905602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.033 [2024-11-18 07:21:06.905669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.033 qpair failed and we were unable to recover it. 00:35:46.033 [2024-11-18 07:21:06.905969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.033 [2024-11-18 07:21:06.906036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.033 qpair failed and we were unable to recover it. 00:35:46.033 [2024-11-18 07:21:06.906286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.033 [2024-11-18 07:21:06.906353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.033 qpair failed and we were unable to recover it. 00:35:46.033 [2024-11-18 07:21:06.906642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.033 [2024-11-18 07:21:06.906709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.033 qpair failed and we were unable to recover it. 00:35:46.033 [2024-11-18 07:21:06.907002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.033 [2024-11-18 07:21:06.907069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.033 qpair failed and we were unable to recover it. 00:35:46.033 [2024-11-18 07:21:06.907325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.033 [2024-11-18 07:21:06.907393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.033 qpair failed and we were unable to recover it. 00:35:46.033 [2024-11-18 07:21:06.907656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.033 [2024-11-18 07:21:06.907725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.033 qpair failed and we were unable to recover it. 00:35:46.033 [2024-11-18 07:21:06.908023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.033 [2024-11-18 07:21:06.908090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.033 qpair failed and we were unable to recover it. 00:35:46.033 [2024-11-18 07:21:06.908399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.033 [2024-11-18 07:21:06.908466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.033 qpair failed and we were unable to recover it. 00:35:46.033 [2024-11-18 07:21:06.908811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.033 [2024-11-18 07:21:06.908869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.033 qpair failed and we were unable to recover it. 
00:35:46.033 [2024-11-18 07:21:06.909050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.033 [2024-11-18 07:21:06.909133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.033 qpair failed and we were unable to recover it. 00:35:46.033 [2024-11-18 07:21:06.909418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.033 [2024-11-18 07:21:06.909486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.033 qpair failed and we were unable to recover it. 00:35:46.033 [2024-11-18 07:21:06.909767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.033 [2024-11-18 07:21:06.909835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.033 qpair failed and we were unable to recover it. 00:35:46.033 [2024-11-18 07:21:06.910137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.033 [2024-11-18 07:21:06.910209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.033 qpair failed and we were unable to recover it. 00:35:46.033 [2024-11-18 07:21:06.910515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.033 [2024-11-18 07:21:06.910584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.033 qpair failed and we were unable to recover it. 00:35:46.033 [2024-11-18 07:21:06.910851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.033 [2024-11-18 07:21:06.910917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.033 qpair failed and we were unable to recover it. 00:35:46.033 [2024-11-18 07:21:06.911219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.033 [2024-11-18 07:21:06.911285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.033 qpair failed and we were unable to recover it. 00:35:46.033 [2024-11-18 07:21:06.911656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.033 [2024-11-18 07:21:06.911725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.033 qpair failed and we were unable to recover it. 00:35:46.033 [2024-11-18 07:21:06.911978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.033 [2024-11-18 07:21:06.912048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.033 qpair failed and we were unable to recover it. 00:35:46.033 [2024-11-18 07:21:06.912338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.033 [2024-11-18 07:21:06.912405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.033 qpair failed and we were unable to recover it. 
00:35:46.033 [2024-11-18 07:21:06.912669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.033 [2024-11-18 07:21:06.912737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.033 qpair failed and we were unable to recover it. 00:35:46.033 [2024-11-18 07:21:06.912998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.033 [2024-11-18 07:21:06.913066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.033 qpair failed and we were unable to recover it. 00:35:46.033 [2024-11-18 07:21:06.913353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.033 [2024-11-18 07:21:06.913420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.913736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.913804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.914094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.914162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.914451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.914531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.914830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.914896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.915175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.915243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.915487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.915566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.915814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.915880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 
00:35:46.034 [2024-11-18 07:21:06.916102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.916186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.916476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.916573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.916843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.916911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.917162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.917230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.917502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.917570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.917868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.917936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.918203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.918271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.918565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.918634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.918927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.918994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.919245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.919313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 
00:35:46.034 [2024-11-18 07:21:06.919575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.919635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.919914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.919982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.920282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.920347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.920646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.920714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.920977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.921044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.921330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.921395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.921651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.921720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.922011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.922079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.922327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.922393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.922635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.922704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 
00:35:46.034 [2024-11-18 07:21:06.922964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.923031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.923252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.923320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.923536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.923608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.923911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.923990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.924239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.924306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.924532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.924603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-11-18 07:21:06.924897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-11-18 07:21:06.924967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.925228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.925294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.925543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.925612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.925892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.925961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 
00:35:46.035 [2024-11-18 07:21:06.926227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.926296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.926585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.926653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.926917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.926982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.927261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.927327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.927620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.927689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.927948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.928016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.928307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.928374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.928638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.928707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.928994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.929061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.929311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.929381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 
00:35:46.035 [2024-11-18 07:21:06.929656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.929724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.929976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.930043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.930344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.930412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.930665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.930733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.931021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.931088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.931336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.931402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.931629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.931698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.931946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.932015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.932306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.932372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.932739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.932799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 
00:35:46.035 [2024-11-18 07:21:06.933068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.933137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.933426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.933505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.933798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.933866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.934067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.934134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.934360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.934427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.934741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.934809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-11-18 07:21:06.935105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-11-18 07:21:06.935172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.935438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.935516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.935809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.935878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.936121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.936191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 
00:35:46.036 [2024-11-18 07:21:06.936485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.936582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.936830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.936898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.937158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.937226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.937527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.937606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.937893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.937962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.938174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.938245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.938538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.938606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.938840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.938907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.939200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.939269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.939481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.939562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 
00:35:46.036 [2024-11-18 07:21:06.939860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.939928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.940188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.940255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.940556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.940626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.940883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.940951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.941213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.941280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.941542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.941611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.941863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.941932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.942246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.942313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.942616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.942684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.942977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.943043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 
00:35:46.036 [2024-11-18 07:21:06.943344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.943410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.943685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.943755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.944023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.944091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.944351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.944417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.944739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.944799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.944986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.945068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.945368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.945436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.945716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.945784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.946065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.946132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.946387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.946456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 
00:35:46.036 [2024-11-18 07:21:06.946801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.946869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-11-18 07:21:06.947164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-11-18 07:21:06.947231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-11-18 07:21:06.947525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-11-18 07:21:06.947595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-11-18 07:21:06.947844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-11-18 07:21:06.947910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-11-18 07:21:06.948165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-11-18 07:21:06.948233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-11-18 07:21:06.948534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-11-18 07:21:06.948603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-11-18 07:21:06.948920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-11-18 07:21:06.948988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-11-18 07:21:06.949241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-11-18 07:21:06.949307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-11-18 07:21:06.949611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-11-18 07:21:06.949679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-11-18 07:21:06.949908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-11-18 07:21:06.949976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 
00:35:46.037 [2024-11-18 07:21:06.950261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.037 [2024-11-18 07:21:06.950329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420
00:35:46.037 qpair failed and we were unable to recover it.
00:35:46.038 [log condensed: the connect()/qpair-connect error pair above repeats ~70 times for tqpair=0x7f772c000b90 between 07:21:06.950 and 07:21:06.967, each attempt ending with "qpair failed and we were unable to recover it."]
00:35:46.039 [2024-11-18 07:21:06.967781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.039 [2024-11-18 07:21:06.967835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420
00:35:46.039 qpair failed and we were unable to recover it.
00:35:46.329 [log condensed: the same connect()/qpair-connect error pair repeats ~140 times for tqpair=0x2170b40 between 07:21:06.967 and 07:21:06.996, each attempt ending with "qpair failed and we were unable to recover it."]
00:35:46.329 [2024-11-18 07:21:06.996252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.329 [2024-11-18 07:21:06.996287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.329 qpair failed and we were unable to recover it. 00:35:46.329 [2024-11-18 07:21:06.996391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.329 [2024-11-18 07:21:06.996424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.329 qpair failed and we were unable to recover it. 00:35:46.329 [2024-11-18 07:21:06.996544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.329 [2024-11-18 07:21:06.996578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.329 qpair failed and we were unable to recover it. 00:35:46.329 [2024-11-18 07:21:06.996689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.329 [2024-11-18 07:21:06.996723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.329 qpair failed and we were unable to recover it. 00:35:46.329 [2024-11-18 07:21:06.996874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.329 [2024-11-18 07:21:06.996910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.329 qpair failed and we were unable to recover it. 00:35:46.329 [2024-11-18 07:21:06.997048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.329 [2024-11-18 07:21:06.997081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.329 qpair failed and we were unable to recover it. 00:35:46.329 [2024-11-18 07:21:06.997191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.329 [2024-11-18 07:21:06.997223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.329 qpair failed and we were unable to recover it. 00:35:46.329 [2024-11-18 07:21:06.997439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.329 [2024-11-18 07:21:06.997474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.329 qpair failed and we were unable to recover it. 00:35:46.329 [2024-11-18 07:21:06.997601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.329 [2024-11-18 07:21:06.997635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.329 qpair failed and we were unable to recover it. 00:35:46.329 [2024-11-18 07:21:06.997778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.329 [2024-11-18 07:21:06.997818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.329 qpair failed and we were unable to recover it. 
00:35:46.329 [2024-11-18 07:21:06.997963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.329 [2024-11-18 07:21:06.998004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.329 qpair failed and we were unable to recover it. 00:35:46.329 [2024-11-18 07:21:06.998122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.329 [2024-11-18 07:21:06.998156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.329 qpair failed and we were unable to recover it. 00:35:46.329 [2024-11-18 07:21:06.998425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.329 [2024-11-18 07:21:06.998460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.329 qpair failed and we were unable to recover it. 00:35:46.329 [2024-11-18 07:21:06.998614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.329 [2024-11-18 07:21:06.998649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.329 qpair failed and we were unable to recover it. 00:35:46.329 [2024-11-18 07:21:06.998795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.329 [2024-11-18 07:21:06.998830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.329 qpair failed and we were unable to recover it. 00:35:46.329 [2024-11-18 07:21:06.998940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.329 [2024-11-18 07:21:06.998973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.329 qpair failed and we were unable to recover it. 00:35:46.329 [2024-11-18 07:21:06.999153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.329 [2024-11-18 07:21:06.999189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.329 qpair failed and we were unable to recover it. 00:35:46.329 [2024-11-18 07:21:06.999320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.329 [2024-11-18 07:21:06.999355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.329 qpair failed and we were unable to recover it. 00:35:46.329 [2024-11-18 07:21:06.999505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.329 [2024-11-18 07:21:06.999552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.329 qpair failed and we were unable to recover it. 00:35:46.329 [2024-11-18 07:21:06.999691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.329 [2024-11-18 07:21:06.999726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.329 qpair failed and we were unable to recover it. 
00:35:46.329 [2024-11-18 07:21:06.999898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.329 [2024-11-18 07:21:06.999948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.329 qpair failed and we were unable to recover it. 00:35:46.329 [2024-11-18 07:21:07.000089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.000125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.000265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.000300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.000448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.000483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.000604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.000639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.000774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.000809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.000952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.000986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.001157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.001212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.001355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.001391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.001529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.001572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 
00:35:46.330 [2024-11-18 07:21:07.001711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.001747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.001857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.001896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.002047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.002088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.002232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.002267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.002411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.002446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.002563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.002597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.002715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.002755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.003025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.003079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.003217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.003249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.003360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.003394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 
00:35:46.330 [2024-11-18 07:21:07.003507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.003550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.003652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.003688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.003820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.003860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.004036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.004073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.004230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.004266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.004481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.004523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.004668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.004703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.004821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.004854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.004974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.005008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-11-18 07:21:07.005109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-11-18 07:21:07.005144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 
00:35:46.330 [2024-11-18 07:21:07.005267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.005302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.005446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.005480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.005644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.005678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.005790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.005825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.005940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.005975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.006106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.006140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.006273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.006308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.006464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.006516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.006630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.006664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.006812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.006864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 
00:35:46.331 [2024-11-18 07:21:07.006999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.007054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.007214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.007248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.007382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.007417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.007533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.007569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.007682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.007716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.007858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.007893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.008013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.008066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.008189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.008224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.008363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.008399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.008531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.008566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 
00:35:46.331 [2024-11-18 07:21:07.008676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.008709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.008882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.008916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.009091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.009156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.009323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.009357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.009467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.009509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.009637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.009672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.009819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.009853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.009973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.010008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.010111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.010146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.010261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.010295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 
00:35:46.331 [2024-11-18 07:21:07.010399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.010434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.010552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.010586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.010703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.010737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.010876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.010910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.011044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.011078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-11-18 07:21:07.011193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-11-18 07:21:07.011228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.011368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.011403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.011524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.011560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.011682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.011716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.011850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.011884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 
00:35:46.332 [2024-11-18 07:21:07.012038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.012073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.012322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.012386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.012603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.012639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.012746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.012780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.012925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.012959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.013065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.013099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.013240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.013276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.013419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.013453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.013586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.013621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.013731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.013766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 
00:35:46.332 [2024-11-18 07:21:07.013877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.013911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.014074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.014128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.014255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.014292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.014432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.014467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.014599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.014643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.014761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.014796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.014911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.014946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.015055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.015090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.015313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.015365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.015538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.015573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 
00:35:46.332 [2024-11-18 07:21:07.015683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.015718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.015823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.015858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.016002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.016036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.016147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.016181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.016321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.016356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.016522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.016558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.016669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.016704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.016844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.016878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.016984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.017037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.017168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.017222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 
00:35:46.332 [2024-11-18 07:21:07.017389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-11-18 07:21:07.017423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-11-18 07:21:07.017554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.017589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.017692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.017727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.017872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.017906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.018019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.018054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.018287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.018322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.018434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.018468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.018604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.018637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.018743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.018777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.018881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.018913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 
00:35:46.333 [2024-11-18 07:21:07.019058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.019094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.019237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.019273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.019444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.019505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.019649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.019682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.019782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.019816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.019927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.019961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.020081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.020115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.020274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.020310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.020444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.020477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.020592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.020626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 
00:35:46.333 [2024-11-18 07:21:07.020745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.020778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.020920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.020953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.021100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.021135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.021307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.021356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.021481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.021544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.021639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.021672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.021773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.021824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.021961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.021996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.022106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.022142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.022292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.022331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 
00:35:46.333 [2024-11-18 07:21:07.022497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.022531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.022637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.022669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.022820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.022853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.022970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.023004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.023173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.023225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.023410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.023445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-11-18 07:21:07.023589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-11-18 07:21:07.023623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.023732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.023765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.023959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.023994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.024097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.024131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 
00:35:46.334 [2024-11-18 07:21:07.024262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.024296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.024469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.024520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.024638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.024671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.024776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.024809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.024926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.024960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.025134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.025168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.025315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.025366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.025507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.025558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.025648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.025681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.025793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.025826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 
00:35:46.334 [2024-11-18 07:21:07.025979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.026014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.026203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.026252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.026391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.026441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.026545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.026579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.026680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.026713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.026845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.026878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.027064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.027098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.027207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.027242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.027395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.027428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.027561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.027595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 
00:35:46.334 [2024-11-18 07:21:07.027705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.027738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.027873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.027906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.028101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.028136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.028255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.028289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.028448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.028481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.028609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.028643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.028755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.028788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.028923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.028957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.029143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.029177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.029313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.029347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 
00:35:46.334 [2024-11-18 07:21:07.029504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.029538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.029663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-11-18 07:21:07.029696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-11-18 07:21:07.029879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.029914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.030041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.030094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.030313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.030346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.030540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.030574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.030677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.030710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.030872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.030905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.031095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.031154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.031314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.031349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 
00:35:46.335 [2024-11-18 07:21:07.031401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217e970 (9): Bad file descriptor 00:35:46.335 [2024-11-18 07:21:07.031580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.031620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.031761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.031797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.031933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.031984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.032146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.032181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.032333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.032368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.032533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.032569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.032676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.032709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.032863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.032898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.033041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.033076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 
00:35:46.335 [2024-11-18 07:21:07.033195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.033231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.033370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.033420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.033547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.033591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.033719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.033771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.033911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.033945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.034079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.034114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.034225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.034260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.034395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.034430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.034555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.034590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.034708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.034742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 
00:35:46.335 [2024-11-18 07:21:07.034871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.034905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.035075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.035110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.335 [2024-11-18 07:21:07.035254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.335 [2024-11-18 07:21:07.035288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.335 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.035387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.035421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.035547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.035581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.035695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.035727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.035887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.035927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.036191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.036242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.036441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.036473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.036602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.036652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 
00:35:46.336 [2024-11-18 07:21:07.036762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.036796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.036947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.036983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.037124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.037159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.037255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.037290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.037443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.037476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.037590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.037623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.037729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.037779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.037916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.037950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.038086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.038120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.038270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.038330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 
00:35:46.336 [2024-11-18 07:21:07.038495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.038531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.038651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.038684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.038836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.038870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.038982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.039033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.039220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.039275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.039442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.039477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.039605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.039639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.039747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.039781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.039913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.039947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.040053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.040088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 
00:35:46.336 [2024-11-18 07:21:07.040231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.040280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.040419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.040455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.040595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.040630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.040768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.040802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.040935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.040971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.041133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.041200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.041351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.336 [2024-11-18 07:21:07.041385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.336 qpair failed and we were unable to recover it. 00:35:46.336 [2024-11-18 07:21:07.041532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.041566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.041678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.041711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.041845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.041880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 
00:35:46.337 [2024-11-18 07:21:07.041988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.042023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.042158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.042194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.042337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.042374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.042501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.042536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.042644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.042680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.042831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.042882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.043055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.043107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.043229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.043265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.043382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.043419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.043560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.043594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 
00:35:46.337 [2024-11-18 07:21:07.043709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.043742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.043878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.043911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.044002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.044036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.044176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.044208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.044343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.044376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.044499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.044536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.044649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.044684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.044861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.044913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.045066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.045104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.045241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.045276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 
00:35:46.337 [2024-11-18 07:21:07.045465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.045508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.045613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.045647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.045767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.045799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.045978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.046029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.046166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.046203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.046360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.046394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.046502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.046545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.046660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.046693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.046864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.046899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.047020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.047071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 
00:35:46.337 [2024-11-18 07:21:07.047187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.047221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.337 [2024-11-18 07:21:07.047380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.337 [2024-11-18 07:21:07.047432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.337 qpair failed and we were unable to recover it. 00:35:46.338 [2024-11-18 07:21:07.047609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.338 [2024-11-18 07:21:07.047660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.338 qpair failed and we were unable to recover it. 00:35:46.338 [2024-11-18 07:21:07.047831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.338 [2024-11-18 07:21:07.047884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.338 qpair failed and we were unable to recover it. 00:35:46.338 [2024-11-18 07:21:07.047996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.338 [2024-11-18 07:21:07.048030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.338 qpair failed and we were unable to recover it. 00:35:46.338 [2024-11-18 07:21:07.048154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.338 [2024-11-18 07:21:07.048205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.338 qpair failed and we were unable to recover it. 00:35:46.338 [2024-11-18 07:21:07.048338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.338 [2024-11-18 07:21:07.048373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.338 qpair failed and we were unable to recover it. 00:35:46.338 [2024-11-18 07:21:07.048497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.338 [2024-11-18 07:21:07.048544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.338 qpair failed and we were unable to recover it. 00:35:46.338 [2024-11-18 07:21:07.048650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.338 [2024-11-18 07:21:07.048683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.338 qpair failed and we were unable to recover it. 00:35:46.338 [2024-11-18 07:21:07.048800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.338 [2024-11-18 07:21:07.048834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.338 qpair failed and we were unable to recover it. 
00:35:46.338 [2024-11-18 07:21:07.048947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.338 [2024-11-18 07:21:07.048982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.338 qpair failed and we were unable to recover it. 00:35:46.338 [2024-11-18 07:21:07.049166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.338 [2024-11-18 07:21:07.049200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.338 qpair failed and we were unable to recover it. 00:35:46.338 [2024-11-18 07:21:07.049351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.338 [2024-11-18 07:21:07.049383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.338 qpair failed and we were unable to recover it. 00:35:46.338 [2024-11-18 07:21:07.049528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.338 [2024-11-18 07:21:07.049564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.338 qpair failed and we were unable to recover it. 00:35:46.338 [2024-11-18 07:21:07.049676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.338 [2024-11-18 07:21:07.049709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.338 qpair failed and we were unable to recover it. 00:35:46.338 [2024-11-18 07:21:07.049845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.338 [2024-11-18 07:21:07.049880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.338 qpair failed and we were unable to recover it. 00:35:46.338 [2024-11-18 07:21:07.050046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.338 [2024-11-18 07:21:07.050087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.338 qpair failed and we were unable to recover it. 00:35:46.338 [2024-11-18 07:21:07.050224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.338 [2024-11-18 07:21:07.050259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.338 qpair failed and we were unable to recover it. 00:35:46.338 [2024-11-18 07:21:07.050422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.338 [2024-11-18 07:21:07.050457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.338 qpair failed and we were unable to recover it. 00:35:46.338 [2024-11-18 07:21:07.050590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.338 [2024-11-18 07:21:07.050623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.338 qpair failed and we were unable to recover it. 
[... the same three-line error sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 07:21:07.050 through 07:21:07.086 (elapsed 00:35:46.338-00:35:46.344) for tqpair handles 0x2170b40, 0x7f7734000b90 and 0x7f7728000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:35:46.344 [2024-11-18 07:21:07.086714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-11-18 07:21:07.086772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-11-18 07:21:07.086923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-11-18 07:21:07.086976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-11-18 07:21:07.087159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-11-18 07:21:07.087192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-11-18 07:21:07.087308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-11-18 07:21:07.087340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-11-18 07:21:07.087505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-11-18 07:21:07.087539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-11-18 07:21:07.087745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-11-18 07:21:07.087779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-11-18 07:21:07.087879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-11-18 07:21:07.087913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-11-18 07:21:07.088045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-11-18 07:21:07.088079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-11-18 07:21:07.088190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-11-18 07:21:07.088223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-11-18 07:21:07.088391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-11-18 07:21:07.088424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 
00:35:46.344 [2024-11-18 07:21:07.088566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-11-18 07:21:07.088601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-11-18 07:21:07.088767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-11-18 07:21:07.088817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-11-18 07:21:07.088984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-11-18 07:21:07.089017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-11-18 07:21:07.089180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-11-18 07:21:07.089213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-11-18 07:21:07.089349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-11-18 07:21:07.089383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-11-18 07:21:07.089539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.089575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.089809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.089845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.090006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.090039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.090136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.090169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.090332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.090365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 
00:35:46.345 [2024-11-18 07:21:07.090528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.090561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.090744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.090793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.090942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.090974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.091136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.091170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.091293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.091325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.091498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.091531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.091715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.091768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.091903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.091938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.092093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.092126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.092268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.092302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 
00:35:46.345 [2024-11-18 07:21:07.092439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.092472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.092610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.092644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.092804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.092838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.092973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.093007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.093148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.093183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.093346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.093379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.093520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.093553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.093718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.093757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.093922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.093956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.094121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.094154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 
00:35:46.345 [2024-11-18 07:21:07.094284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.094318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.094428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.094462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.094627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.094679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.094833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.094868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-11-18 07:21:07.094993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-11-18 07:21:07.095028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.095177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.095210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.095377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.095411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.095559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.095612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.095763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.095814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.095996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.096047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 
00:35:46.346 [2024-11-18 07:21:07.096189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.096225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.096398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.096433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.096581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.096615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.096749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.096784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.096946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.096979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.097086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.097121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.097262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.097296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.097434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.097468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.097617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.097652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.097842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.097895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 
00:35:46.346 [2024-11-18 07:21:07.098057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.098091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.098193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.098227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.098387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.098420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.098584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.098619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.098790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.098825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.098962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.098997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.099161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.099196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.099331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.099365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.099463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.099504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.099641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.099692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 
00:35:46.346 [2024-11-18 07:21:07.099834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.099868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.100008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.100043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.100211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.100246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.100362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.100396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.100531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.100566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.100733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.100766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.101896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.101930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.102058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.102091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-11-18 07:21:07.102182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-11-18 07:21:07.102210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.102304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.102333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 
00:35:46.347 [2024-11-18 07:21:07.102476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.102513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.102672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.102700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.102888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.102938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.103070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.103119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.103210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.103238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.103361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.103389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.103532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.103560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.103679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.103707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.103805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.103833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.103949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.103977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 
00:35:46.347 [2024-11-18 07:21:07.104120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.104148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.104272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.104301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.104389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.104418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.104512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.104542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.104659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.104688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.104765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.104793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.104919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.104948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.105034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.105063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.105156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.105185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.105315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.105344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 
00:35:46.347 [2024-11-18 07:21:07.105458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.105486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.105622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.105650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.105766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.105795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.105909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.105938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.106049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.106091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.106294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.106323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.106470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.106510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.106629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.106657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.106778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.106806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.106913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.106942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 
00:35:46.347 [2024-11-18 07:21:07.107076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.107113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.347 [2024-11-18 07:21:07.107258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.347 [2024-11-18 07:21:07.107293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.347 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.107506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.107552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.107669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.107723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.107894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.107929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.108066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.108101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.108307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.108342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.108447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.108505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.108607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.108657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.108829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.108880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 
00:35:46.348 [2024-11-18 07:21:07.109045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.109109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.109346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.109387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.109539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.109568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.109661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.109690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.109821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.109856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.109989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.110038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.110178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.110213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.110409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.110475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.110654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.110682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.110796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.110825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 
00:35:46.348 [2024-11-18 07:21:07.111022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.111057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.111222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.111257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.111395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.111431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.111577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.111605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.111722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.111750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.111895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.111945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.112136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.112173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.112378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.112417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.112561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.112589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.112736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.112764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 
00:35:46.348 [2024-11-18 07:21:07.112942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.112977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.113195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.113233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.113438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.113466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.113636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.113680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.113871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.113922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.114104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.114141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.348 [2024-11-18 07:21:07.114269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.348 [2024-11-18 07:21:07.114304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.348 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.114439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.114467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.114573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.114600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.114723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.114752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 
00:35:46.349 [2024-11-18 07:21:07.114924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.114953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.115114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.115149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.115293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.115327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.115501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.115530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.115645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.115674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.115753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.115781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.115926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.115960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.116098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.116132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.116265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.116295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.116469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.116503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 
00:35:46.349 [2024-11-18 07:21:07.116622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.116650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.116807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.116843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.116991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.117025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.117208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.117255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.117458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.117503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.117650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.117678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.117764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.117794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.117970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.118005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.118163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.118198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.118326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.118355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 
00:35:46.349 [2024-11-18 07:21:07.118537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.118567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.118689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.118718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.118865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.118893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.119111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.119147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.119259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.119289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.119421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.119457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.119603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.119632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.119747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.119775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.119921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.119970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.120140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.120190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 
00:35:46.349 [2024-11-18 07:21:07.120377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.349 [2024-11-18 07:21:07.120420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.349 qpair failed and we were unable to recover it. 00:35:46.349 [2024-11-18 07:21:07.120534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.120563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.120673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.120701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.120788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.120815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.120934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.120967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.121141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.121178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.121317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.121353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.121506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.121535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.121653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.121682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.121774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.121803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 
00:35:46.350 [2024-11-18 07:21:07.121914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.121942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.122138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.122174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.122413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.122448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.122594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.122623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.122735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.122781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.122919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.122954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.123109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.123144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.123313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.123349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.123545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.123587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.123711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.123741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 
00:35:46.350 [2024-11-18 07:21:07.123884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.123932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.124073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.124119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.124237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.124264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.124407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.124435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.124555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.124584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.124696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.124723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.124920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.124983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.125088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.125139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.125282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.125310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 00:35:46.350 [2024-11-18 07:21:07.125433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.125460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.350 qpair failed and we were unable to recover it. 
00:35:46.350 [2024-11-18 07:21:07.125613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.350 [2024-11-18 07:21:07.125666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.125823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.125870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.126000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.126052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.126172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.126202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.126316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.126344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.126444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.126471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.126599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.126626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.126748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.126776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.126891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.126920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.127036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.127063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 
00:35:46.351 [2024-11-18 07:21:07.127179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.127207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.127353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.127380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.127502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.127534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.127652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.127681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.127799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.127832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.127976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.128004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.128132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.128161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.128279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.128306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.128426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.128455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.128633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.128682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 
00:35:46.351 [2024-11-18 07:21:07.128765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.128793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.128961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.129019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.129117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.129145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.129266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.129293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.129416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.129445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.129581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.129617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.129736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.129771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.129906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.129945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.130119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.130157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.130333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.130388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 
00:35:46.351 [2024-11-18 07:21:07.130539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.130569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.130696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.130724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.130869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.130897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.131035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.131087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.131243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.351 [2024-11-18 07:21:07.131279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.351 qpair failed and we were unable to recover it. 00:35:46.351 [2024-11-18 07:21:07.131409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.131437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.131536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.131564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.131675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.131725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.131841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.131869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.131978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.132006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 
00:35:46.352 [2024-11-18 07:21:07.132120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.132147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.132297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.132326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.132419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.132449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.132599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.132628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.132721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.132749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.132932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.132967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.133108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.133144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.133281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.133317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.133438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.133467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.133630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.133682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 
00:35:46.352 [2024-11-18 07:21:07.133935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.133972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.134209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.134247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.134400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.134435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.134652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.134681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.134882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.134923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.135033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.135067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.135221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.135255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.135463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.135497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.135617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.135645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.135780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.135846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 
00:35:46.352 [2024-11-18 07:21:07.136070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.136106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.136327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.136361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.136528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.136571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.136693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.136721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.136837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.136865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.137022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.137057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.137174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.137237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.137513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.137561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.137679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.352 [2024-11-18 07:21:07.137707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.352 qpair failed and we were unable to recover it. 00:35:46.352 [2024-11-18 07:21:07.137942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-11-18 07:21:07.138008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 
00:35:46.353 [2024-11-18 07:21:07.138308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-11-18 07:21:07.138372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-11-18 07:21:07.138636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-11-18 07:21:07.138664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-11-18 07:21:07.138806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-11-18 07:21:07.138834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-11-18 07:21:07.138950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-11-18 07:21:07.139000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-11-18 07:21:07.139171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-11-18 07:21:07.139219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-11-18 07:21:07.139495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-11-18 07:21:07.139524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-11-18 07:21:07.139641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-11-18 07:21:07.139669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-11-18 07:21:07.139834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-11-18 07:21:07.139871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-11-18 07:21:07.140155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-11-18 07:21:07.140193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-11-18 07:21:07.140302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-11-18 07:21:07.140340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 
00:35:46.353 [2024-11-18 07:21:07.140520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-11-18 07:21:07.140566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-11-18 07:21:07.140694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-11-18 07:21:07.140735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-11-18 07:21:07.140854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-11-18 07:21:07.140928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-11-18 07:21:07.141052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-11-18 07:21:07.141079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-11-18 07:21:07.141173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-11-18 07:21:07.141202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-11-18 07:21:07.141358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-11-18 07:21:07.141385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-11-18 07:21:07.141505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-11-18 07:21:07.141533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-11-18 07:21:07.141662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-11-18 07:21:07.141690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-11-18 07:21:07.141778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-11-18 07:21:07.141806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-11-18 07:21:07.141889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-11-18 07:21:07.141916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 
00:35:46.353 [2024-11-18 07:21:07.142032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.353 [2024-11-18 07:21:07.142061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420
00:35:46.353 qpair failed and we were unable to recover it.
00:35:46.353 [2024-11-18 07:21:07.142193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.353 [2024-11-18 07:21:07.142233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420
00:35:46.353 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 or 0x7f7734000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously, with timestamps running from 07:21:07.142351 through 07:21:07.203973 ...]
00:35:46.359 [2024-11-18 07:21:07.204158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.359 [2024-11-18 07:21:07.204224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420
00:35:46.359 qpair failed and we were unable to recover it.
00:35:46.359 [2024-11-18 07:21:07.204524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-11-18 07:21:07.204591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-11-18 07:21:07.204885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-11-18 07:21:07.204950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-11-18 07:21:07.205252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-11-18 07:21:07.205316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.205570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.205638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.205935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.206001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.206289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.206353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.206646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.206712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.206974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.207039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.207288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.207353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.207613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.207680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 
00:35:46.360 [2024-11-18 07:21:07.207970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.208036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.208322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.208386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.208685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.208751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.208964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.209030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.209319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.209384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.209689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.209755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.210043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.210107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.210352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.210418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.210738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.210806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.211097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.211162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 
00:35:46.360 [2024-11-18 07:21:07.211456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.211535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.211833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.211909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.212165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.212230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.212517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.212583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.212880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.212945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.213190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.213253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.213556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.213621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.213874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.213939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.214227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.214291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.214585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.214651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 
00:35:46.360 [2024-11-18 07:21:07.214891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.214957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.215241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.215305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.215548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.215616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.215887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.215954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.216207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.216271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-11-18 07:21:07.216519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-11-18 07:21:07.216586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.216875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.216940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.217197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.217264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.217555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.217622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.217915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.217981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 
00:35:46.361 [2024-11-18 07:21:07.218270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.218334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.218628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.218695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.218998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.219063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.219327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.219391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.219702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.219769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.219965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.220033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.220322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.220386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.220648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.220715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.220984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.221051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.221295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.221361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 
00:35:46.361 [2024-11-18 07:21:07.221570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.221636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.221861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.221927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.222182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.222247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.222519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.222585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.222826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.222890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.223120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.223185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.223426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.223505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.223702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.223766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.224047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.224111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.224395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.224459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 
00:35:46.361 [2024-11-18 07:21:07.224764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.224827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.225041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.225117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.225412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.225477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.225785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.225852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.226103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.226169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.226458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.226554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.226768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-11-18 07:21:07.226834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-11-18 07:21:07.227125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.227189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.227472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.227552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.227850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.227916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 
00:35:46.362 [2024-11-18 07:21:07.228173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.228238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.228534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.228599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.228856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.228924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.229223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.229287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.229567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.229635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.229918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.229984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.230211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.230275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.230567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.230635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.230833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.230903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.231122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.231187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 
00:35:46.362 [2024-11-18 07:21:07.231422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.231487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.231701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.231769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.232042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.232108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.232360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.232425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.232725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.232790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.233040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.233104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.233304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.233369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.233567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.233633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.233938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.234004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.234305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.234371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 
00:35:46.362 [2024-11-18 07:21:07.234626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.234693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.234929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.234995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.235282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.235349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.235670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.235736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.235990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.236056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.236346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.236412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.236657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.236723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.236972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.237037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.237334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.237401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.237623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.237691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 
00:35:46.362 [2024-11-18 07:21:07.237936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.238004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.238294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.238371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.238622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.238688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-11-18 07:21:07.238932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-11-18 07:21:07.238999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.239248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.239313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.239623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.239691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.239991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.240057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.240305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.240373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.240675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.240742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.241044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.241110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 
00:35:46.363 [2024-11-18 07:21:07.241400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.241465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.241729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.241794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.242084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.242152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.242353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.242419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.242699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.242766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.243075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.243141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.243399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.243466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.243687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.243753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.244044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.244111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.244404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.244470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 
00:35:46.363 [2024-11-18 07:21:07.244694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.244761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.245028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.245094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.245345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.245410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.245713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.245781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.245977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.246046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.246279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.246345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.246629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.246697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.246995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.247062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.247320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.247387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-11-18 07:21:07.247685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-11-18 07:21:07.247753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 
00:35:46.363 [2024-11-18 07:21:07.248012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:35:46.363 [2024-11-18 07:21:07.248079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 
00:35:46.363 qpair failed and we were unable to recover it. 
00:35:46.363-00:35:46.654 [2024-11-18 07:21:07.248315 through 07:21:07.292415] the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats for every subsequent reconnect attempt against addr=10.0.0.2, port=4420, alternating between tqpair=0x7f7734000b90, tqpair=0x7f772c000b90, and tqpair=0x2170b40. 
00:35:46.654 [2024-11-18 07:21:07.292533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.654 [2024-11-18 07:21:07.292561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.654 qpair failed and we were unable to recover it. 00:35:46.654 [2024-11-18 07:21:07.292687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.654 [2024-11-18 07:21:07.292715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.654 qpair failed and we were unable to recover it. 00:35:46.654 [2024-11-18 07:21:07.292831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.654 [2024-11-18 07:21:07.292858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.654 qpair failed and we were unable to recover it. 00:35:46.654 [2024-11-18 07:21:07.292961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.654 [2024-11-18 07:21:07.292995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.654 qpair failed and we were unable to recover it. 00:35:46.654 [2024-11-18 07:21:07.293131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.654 [2024-11-18 07:21:07.293164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.654 qpair failed and we were unable to recover it. 00:35:46.654 [2024-11-18 07:21:07.293289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.654 [2024-11-18 07:21:07.293322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.654 qpair failed and we were unable to recover it. 00:35:46.654 [2024-11-18 07:21:07.293460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.654 [2024-11-18 07:21:07.293487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.654 qpair failed and we were unable to recover it. 00:35:46.654 [2024-11-18 07:21:07.293579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.654 [2024-11-18 07:21:07.293606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.654 qpair failed and we were unable to recover it. 00:35:46.654 [2024-11-18 07:21:07.293700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.654 [2024-11-18 07:21:07.293727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.654 qpair failed and we were unable to recover it. 00:35:46.654 [2024-11-18 07:21:07.293851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.654 [2024-11-18 07:21:07.293879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.654 qpair failed and we were unable to recover it. 
00:35:46.654 [2024-11-18 07:21:07.294018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.654 [2024-11-18 07:21:07.294046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.654 qpair failed and we were unable to recover it. 00:35:46.654 [2024-11-18 07:21:07.294135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.654 [2024-11-18 07:21:07.294162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.654 qpair failed and we were unable to recover it. 00:35:46.654 [2024-11-18 07:21:07.294276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.294305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.295011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.295066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.295226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.295257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.295414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.295445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.295625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.295654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.295803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.295830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.295929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.295956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.296099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.296127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 
00:35:46.655 [2024-11-18 07:21:07.296235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.296262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.296606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.296638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.296739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.296767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.297609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.297643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.297742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.297772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.297919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.297946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.298054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.298082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.298175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.298203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.298339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.298370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.298472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.298518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 
00:35:46.655 [2024-11-18 07:21:07.298660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.298688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.298818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.298845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.298933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.298961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.299080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.299108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.299257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.299318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.299413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.299442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.299546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.299576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.299684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.299710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.300220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.300251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.300395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.300422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 
00:35:46.655 [2024-11-18 07:21:07.300551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.300579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.300675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.300703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.300853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.300880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.300986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.301016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.655 [2024-11-18 07:21:07.301130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.655 [2024-11-18 07:21:07.301157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.655 qpair failed and we were unable to recover it. 00:35:46.656 [2024-11-18 07:21:07.301278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.301304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.656 qpair failed and we were unable to recover it. 00:35:46.656 [2024-11-18 07:21:07.301392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.301419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.656 qpair failed and we were unable to recover it. 00:35:46.656 [2024-11-18 07:21:07.301509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.301538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.656 qpair failed and we were unable to recover it. 00:35:46.656 [2024-11-18 07:21:07.301623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.301650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.656 qpair failed and we were unable to recover it. 00:35:46.656 [2024-11-18 07:21:07.301738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.301765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.656 qpair failed and we were unable to recover it. 
00:35:46.656 [2024-11-18 07:21:07.301948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.301995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.656 qpair failed and we were unable to recover it. 00:35:46.656 [2024-11-18 07:21:07.302134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.302179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.656 qpair failed and we were unable to recover it. 00:35:46.656 [2024-11-18 07:21:07.302358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.302388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.656 qpair failed and we were unable to recover it. 00:35:46.656 [2024-11-18 07:21:07.302516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.302561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.656 qpair failed and we were unable to recover it. 00:35:46.656 [2024-11-18 07:21:07.302685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.302711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.656 qpair failed and we were unable to recover it. 00:35:46.656 [2024-11-18 07:21:07.302843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.302888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.656 qpair failed and we were unable to recover it. 00:35:46.656 [2024-11-18 07:21:07.303035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.303079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.656 qpair failed and we were unable to recover it. 00:35:46.656 [2024-11-18 07:21:07.303237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.303269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.656 qpair failed and we were unable to recover it. 00:35:46.656 [2024-11-18 07:21:07.303371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.303398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.656 qpair failed and we were unable to recover it. 00:35:46.656 [2024-11-18 07:21:07.303560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.303589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.656 qpair failed and we were unable to recover it. 
00:35:46.656 [2024-11-18 07:21:07.303668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.303695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.656 qpair failed and we were unable to recover it. 00:35:46.656 [2024-11-18 07:21:07.303815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.303843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.656 qpair failed and we were unable to recover it. 00:35:46.656 [2024-11-18 07:21:07.303987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.304014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.656 qpair failed and we were unable to recover it. 00:35:46.656 [2024-11-18 07:21:07.304117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.304145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.656 qpair failed and we were unable to recover it. 00:35:46.656 [2024-11-18 07:21:07.304266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.304293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.656 qpair failed and we were unable to recover it. 00:35:46.656 [2024-11-18 07:21:07.304383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.304411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.656 qpair failed and we were unable to recover it. 00:35:46.656 [2024-11-18 07:21:07.304500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.304529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.656 qpair failed and we were unable to recover it. 00:35:46.656 [2024-11-18 07:21:07.304625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.304651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.656 qpair failed and we were unable to recover it. 00:35:46.656 [2024-11-18 07:21:07.304742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.656 [2024-11-18 07:21:07.304770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.304911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.304944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 
00:35:46.657 [2024-11-18 07:21:07.305088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.305116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.305231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.305259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.305373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.305400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.305501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.305538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.305646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.305686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.305779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.305808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.305929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.305957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.306094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.306140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.306269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.306296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.306442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.306470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 
00:35:46.657 [2024-11-18 07:21:07.306564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.306592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.306709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.306737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.306855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.306882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.307028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.307055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.307171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.307199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.307291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.307319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.307441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.307469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.307567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.307594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.307688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.307716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.307855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.307883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 
00:35:46.657 [2024-11-18 07:21:07.308020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.308067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.308196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.308243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.308408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.308436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.308522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.308551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.308645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.308672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.308760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.308787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.657 qpair failed and we were unable to recover it. 00:35:46.657 [2024-11-18 07:21:07.308869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.657 [2024-11-18 07:21:07.308919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.309066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.309094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.309188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.309215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.309351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.309379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 
00:35:46.658 [2024-11-18 07:21:07.309474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.309511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.309609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.309635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.309732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.309757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.309843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.309870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.310023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.310076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.310225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.310275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.310368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.310395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.310511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.310553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.310667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.310695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.310780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.310816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 
00:35:46.658 [2024-11-18 07:21:07.310935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.310962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.311077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.311104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.311194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.311221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.311308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.311336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.311414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.311441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.311536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.311565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.311660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.311688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.311777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.311804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.311918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.311945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.312060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.312086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 
00:35:46.658 [2024-11-18 07:21:07.312176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.312203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.312290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.312318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.312421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.312460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.312590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.312618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.312706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.312733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.312821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.312847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.313003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.313043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.658 [2024-11-18 07:21:07.313155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.658 [2024-11-18 07:21:07.313183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.658 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.313297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.313324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.313463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.313500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 
00:35:46.659 [2024-11-18 07:21:07.313593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.313619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.313733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.313759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.313869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.313914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.314005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.314036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.314135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.314165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.314336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.314394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.314542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.314581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.314672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.314700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.314786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.314812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.314945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.314973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 
00:35:46.659 [2024-11-18 07:21:07.315101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.315129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.315241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.315267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.315355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.315383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.315470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.315508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.315596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.315623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.315733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.315759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.315853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.315881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.315962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.315988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.316074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.316100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.316178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.316204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 
00:35:46.659 [2024-11-18 07:21:07.316321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.316347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.316432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.316458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.316567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.316594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.316677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.316702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.316785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.316811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.316904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.316930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.317018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.317047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.317141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.317185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.317331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.317358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 00:35:46.659 [2024-11-18 07:21:07.317470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.659 [2024-11-18 07:21:07.317504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.659 qpair failed and we were unable to recover it. 
00:35:46.659 [2024-11-18 07:21:07.317588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.317615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.317697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.317723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.317835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.317860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.317955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.317981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.318073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.318101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.318215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.318241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.318326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.318352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.318467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.318499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.318630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.318657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.318742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.318769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 
00:35:46.660 [2024-11-18 07:21:07.318861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.318893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.318969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.318995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.319115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.319154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.319254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.319283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.319389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.319428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.319549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.319577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.319663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.319696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.319806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.319832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.319946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.319972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.320084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.320110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 
00:35:46.660 [2024-11-18 07:21:07.320218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.320244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.320327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.320353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.320462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.320507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.320618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.320647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.320731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.320759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.320847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.320873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.320970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.321000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.321111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.321139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.321261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-11-18 07:21:07.321288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-11-18 07:21:07.321451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.321477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 
00:35:46.661 [2024-11-18 07:21:07.321617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.321644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.321761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.321787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.321905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.321932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.322045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.322071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.322166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.322194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.322296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.322322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.322416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.322442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.322532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.322558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.322643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.322669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.322754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.322780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 
00:35:46.661 [2024-11-18 07:21:07.322889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.322915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.323004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.323030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.323116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.323142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.323277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.323308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.323406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.323445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.323561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.323592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.323680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.323707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.323791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.323818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.324346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.324378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.324470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.324507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 
00:35:46.661 [2024-11-18 07:21:07.324615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.324642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.324740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.324767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.324854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.324880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.325023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.325050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.325142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.325168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.325286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.325312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.325398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.325428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.325543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.325569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.325653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.325680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-11-18 07:21:07.325761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.325787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 
00:35:46.661 [2024-11-18 07:21:07.325875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-11-18 07:21:07.325901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.662 [2024-11-18 07:21:07.325986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.326012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-11-18 07:21:07.326174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.326200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-11-18 07:21:07.326322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.326348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-11-18 07:21:07.326440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.326467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-11-18 07:21:07.326593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.326620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-11-18 07:21:07.326707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.326733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-11-18 07:21:07.326850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.326876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-11-18 07:21:07.326992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.327021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-11-18 07:21:07.327175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.327203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 
00:35:46.662 [2024-11-18 07:21:07.327348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.327404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-11-18 07:21:07.327525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.327564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-11-18 07:21:07.327662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.327696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-11-18 07:21:07.327821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.327849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-11-18 07:21:07.327966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.327992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-11-18 07:21:07.328110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.328136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-11-18 07:21:07.328260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.328291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-11-18 07:21:07.328403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.328430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-11-18 07:21:07.328546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.328585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-11-18 07:21:07.328705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.328732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 
00:35:46.662 [2024-11-18 07:21:07.328844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.328870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-11-18 07:21:07.328966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.328991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-11-18 07:21:07.329129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.329157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-11-18 07:21:07.329276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.329314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-11-18 07:21:07.329435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-11-18 07:21:07.329463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.329572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.329599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.329687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.329713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.329852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.329878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.329993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.330020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.330102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.330129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 
00:35:46.663 [2024-11-18 07:21:07.330224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.330253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.330390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.330417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.330525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.330563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.330651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.330678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.330768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.330795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.330905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.330931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.331038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.331067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.331170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.331197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.331334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.331361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.331454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.331481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 
00:35:46.663 [2024-11-18 07:21:07.331593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.331620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.331705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.331732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.331843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.331887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.332008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.332035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.332207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.332238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.332390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.332421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.332550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.332578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.332668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.332694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.332779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.332805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.332884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.332910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 
00:35:46.663 [2024-11-18 07:21:07.332998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.333035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.333156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.333184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.333313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.333340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.333436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.333464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.333578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.333606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.333698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.333725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.333832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.333859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-11-18 07:21:07.333939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-11-18 07:21:07.333966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.334054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.334081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.334170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.334198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 
00:35:46.664 [2024-11-18 07:21:07.334288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.334314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.334401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.334428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.334526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.334562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.334643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.334669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.334763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.334789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.334873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.334899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.335029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.335058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.335148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.335174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.335302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.335329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.335419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.335445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 
00:35:46.664 [2024-11-18 07:21:07.335563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.335590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.335675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.335701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.335784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.335811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.335924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.335949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.336060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.336088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.336204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.336231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.336351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.336377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.336506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.336533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.336625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.336652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.336731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.336758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 
00:35:46.664 [2024-11-18 07:21:07.336875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.336901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.337014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.337040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.337165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.337203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.337295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.337324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.337413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.337440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.337532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.337560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.337645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.337672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.337757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.337784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.337900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.337927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 00:35:46.664 [2024-11-18 07:21:07.338010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.664 [2024-11-18 07:21:07.338038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.664 qpair failed and we were unable to recover it. 
00:35:46.671 [2024-11-18 07:21:07.363835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-11-18 07:21:07.363862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-11-18 07:21:07.364009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-11-18 07:21:07.364036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-11-18 07:21:07.364181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-11-18 07:21:07.364213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-11-18 07:21:07.364310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-11-18 07:21:07.364349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-11-18 07:21:07.364467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-11-18 07:21:07.364500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-11-18 07:21:07.364625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-11-18 07:21:07.364652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-11-18 07:21:07.364736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-11-18 07:21:07.364762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-11-18 07:21:07.364873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-11-18 07:21:07.364900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-11-18 07:21:07.365045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-11-18 07:21:07.365074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-11-18 07:21:07.365189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-11-18 07:21:07.365214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 
00:35:46.671 [2024-11-18 07:21:07.365328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-11-18 07:21:07.365355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-11-18 07:21:07.365446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-11-18 07:21:07.365473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-11-18 07:21:07.365600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-11-18 07:21:07.365626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-11-18 07:21:07.365728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-11-18 07:21:07.365767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-11-18 07:21:07.365860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-11-18 07:21:07.365887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-11-18 07:21:07.365980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-11-18 07:21:07.366006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-11-18 07:21:07.366143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-11-18 07:21:07.366188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-11-18 07:21:07.366316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-11-18 07:21:07.366354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-11-18 07:21:07.366458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-11-18 07:21:07.366509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.366611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.366640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 
00:35:46.672 [2024-11-18 07:21:07.366766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.366793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.366908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.366954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.367089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.367135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.367222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.367250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.367338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.367365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.367476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.367509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.367583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.367610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.367731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.367756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.367848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.367875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.367951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.367976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 
00:35:46.672 [2024-11-18 07:21:07.368120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.368145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.368241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.368268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.368383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.368410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.368515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.368555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.368658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.368698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.368847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.368875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.369016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.369063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.369174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.369220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.369338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.369366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.369457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.369485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 
00:35:46.672 [2024-11-18 07:21:07.369586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.369612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.369697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.369727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.369814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.369841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.369930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.369956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.370043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.370069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.370165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.370204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.370302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.370331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.370408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.370433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.370548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.370574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.370668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.370694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 
00:35:46.672 [2024-11-18 07:21:07.370772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.370797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.370890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-11-18 07:21:07.370914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-11-18 07:21:07.371024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-11-18 07:21:07.371050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-11-18 07:21:07.371126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-11-18 07:21:07.371151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-11-18 07:21:07.371287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-11-18 07:21:07.371315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-11-18 07:21:07.371430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-11-18 07:21:07.371458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-11-18 07:21:07.371584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-11-18 07:21:07.371611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-11-18 07:21:07.371691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-11-18 07:21:07.371719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-11-18 07:21:07.371804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-11-18 07:21:07.371831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-11-18 07:21:07.371948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-11-18 07:21:07.371975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 
00:35:46.673 [2024-11-18 07:21:07.372062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-11-18 07:21:07.372088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-11-18 07:21:07.372199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-11-18 07:21:07.372238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-11-18 07:21:07.372331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-11-18 07:21:07.372358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-11-18 07:21:07.372470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-11-18 07:21:07.372506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-11-18 07:21:07.372589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-11-18 07:21:07.372615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-11-18 07:21:07.372694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-11-18 07:21:07.372720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-11-18 07:21:07.372804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-11-18 07:21:07.372831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-11-18 07:21:07.372919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-11-18 07:21:07.372946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-11-18 07:21:07.373063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-11-18 07:21:07.373099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-11-18 07:21:07.373186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-11-18 07:21:07.373213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 
00:35:46.673 [2024-11-18 07:21:07.373293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-11-18 07:21:07.373320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-11-18 07:21:07.373437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-11-18 07:21:07.373464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-11-18 07:21:07.373561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.373590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.373683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.373711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.373824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.373850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.373971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.374016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.374145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.374190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.374266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.374292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.374403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.374430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.374518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.374545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 
00:35:46.674 [2024-11-18 07:21:07.374628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.374655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.374741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.374767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.374862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.374888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.374976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.375003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.375124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.375150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.375234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.375260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.375346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.375372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.375480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.375515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.375602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.375629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.375717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.375745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 
00:35:46.674 [2024-11-18 07:21:07.375852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.375878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.375961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.375986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.376083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.376110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.376224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.376253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.376380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.376406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.376521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.376549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.376640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.376665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.376816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.376842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.376970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.376999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.377121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.377167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 
00:35:46.674 [2024-11-18 07:21:07.377271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-11-18 07:21:07.377297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-11-18 07:21:07.377415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.377440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.377519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.377545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.377631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.377657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.377765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.377791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.377874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.377900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.377982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.378008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.378099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.378127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.378210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.378243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.378334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.378361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 
00:35:46.675 [2024-11-18 07:21:07.378451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.378476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.378579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.378605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.378717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.378743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.378833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.378859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.378972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.378997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.379111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.379137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.379274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.379299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.379426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.379452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.379583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.379610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.379691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.379716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 
00:35:46.675 [2024-11-18 07:21:07.379814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.379840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.379955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.379981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.380107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.380146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.380267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.380294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.380375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.380402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.380501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.380529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.380611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.380637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.380746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.380773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.380888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.380915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-11-18 07:21:07.381022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-11-18 07:21:07.381049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 
00:35:46.675 [2024-11-18 07:21:07.381164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.675 [2024-11-18 07:21:07.381190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420
00:35:46.675 qpair failed and we were unable to recover it.
[log condensed: the same three-line error sequence -- posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt logged between 07:21:07.381164 and 07:21:07.410231 (wall clock 00:35:46.675 through 00:35:46.682), cycling over tqpair handles 0x7f7734000b90, 0x7f7728000b90, 0x7f772c000b90, and 0x2170b40. No other messages appear in this interval.]
00:35:46.682 [2024-11-18 07:21:07.410371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.682 [2024-11-18 07:21:07.410399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.682 qpair failed and we were unable to recover it. 00:35:46.682 [2024-11-18 07:21:07.410516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.682 [2024-11-18 07:21:07.410543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.682 qpair failed and we were unable to recover it. 00:35:46.682 [2024-11-18 07:21:07.410653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.682 [2024-11-18 07:21:07.410680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.682 qpair failed and we were unable to recover it. 00:35:46.682 [2024-11-18 07:21:07.410791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.682 [2024-11-18 07:21:07.410816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.682 qpair failed and we were unable to recover it. 00:35:46.682 [2024-11-18 07:21:07.410925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.682 [2024-11-18 07:21:07.410951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.682 qpair failed and we were unable to recover it. 00:35:46.682 [2024-11-18 07:21:07.411056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.682 [2024-11-18 07:21:07.411095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.682 qpair failed and we were unable to recover it. 00:35:46.682 [2024-11-18 07:21:07.411218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.682 [2024-11-18 07:21:07.411245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.682 qpair failed and we were unable to recover it. 00:35:46.682 [2024-11-18 07:21:07.411359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.682 [2024-11-18 07:21:07.411387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.682 qpair failed and we were unable to recover it. 00:35:46.682 [2024-11-18 07:21:07.411542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.682 [2024-11-18 07:21:07.411569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.682 qpair failed and we were unable to recover it. 00:35:46.682 [2024-11-18 07:21:07.411651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.411683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 
00:35:46.683 [2024-11-18 07:21:07.411791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.411826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.411914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.411941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.412055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.412081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.412202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.412228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.412348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.412374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.412463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.412494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.412578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.412604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.412714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.412742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.412864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.412890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.412981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.413007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 
00:35:46.683 [2024-11-18 07:21:07.413122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.413151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.413254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.413294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.413381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.413409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.413564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.413591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.413680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.413706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.413829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.413854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.413969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.413995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.414084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.414110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.414202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.414227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.414307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.414335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 
00:35:46.683 [2024-11-18 07:21:07.414448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.414474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.414620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.414659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.414751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.414780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.414894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.414921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.415004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.415031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.415147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.415200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.415350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.415382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.415508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.415536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-11-18 07:21:07.415631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-11-18 07:21:07.415657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-11-18 07:21:07.415739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.415766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 
00:35:46.684 [2024-11-18 07:21:07.415851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.415880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-11-18 07:21:07.416022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.416048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-11-18 07:21:07.416146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.416173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-11-18 07:21:07.416316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.416364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-11-18 07:21:07.416533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.416572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-11-18 07:21:07.416695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.416722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-11-18 07:21:07.416845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.416873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-11-18 07:21:07.417001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.417048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-11-18 07:21:07.417157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.417183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-11-18 07:21:07.417335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.417362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 
00:35:46.684 [2024-11-18 07:21:07.417487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.417521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-11-18 07:21:07.417643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.417669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-11-18 07:21:07.417792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.417819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-11-18 07:21:07.417937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.417964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-11-18 07:21:07.418088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.418127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-11-18 07:21:07.418246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.418280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-11-18 07:21:07.418376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.418402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-11-18 07:21:07.418505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.418531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-11-18 07:21:07.418644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.418670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-11-18 07:21:07.418772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.418807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 
00:35:46.684 [2024-11-18 07:21:07.418948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.418974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-11-18 07:21:07.419053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.419078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-11-18 07:21:07.419164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-11-18 07:21:07.419191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-11-18 07:21:07.419276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.419306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.419416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.419442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.419559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.419587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.419693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.419719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.419815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.419841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.419926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.419953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.420048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.420074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 
00:35:46.685 [2024-11-18 07:21:07.420166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.420205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.420327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.420356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.420500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.420529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.420620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.420646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.420730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.420757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.420881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.420931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.421070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.421120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.421316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.421343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.421454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.421480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.421602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.421627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 
00:35:46.685 [2024-11-18 07:21:07.421737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.421762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.421852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.421877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.421985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.422011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.422123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.422157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.422239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.422265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.422392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.422418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.422533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.422559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.422673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.422699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.422823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.422849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.422989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.423015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 
00:35:46.685 [2024-11-18 07:21:07.423155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.423185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.423308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.423334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.423446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.423471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.423569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.423595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.423700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.423726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.423851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-11-18 07:21:07.423877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-11-18 07:21:07.423985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.424011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.424125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.424151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.424256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.424295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.424412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.424440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 
00:35:46.686 [2024-11-18 07:21:07.424556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.424583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.424668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.424695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.424804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.424831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.424946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.424973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.425115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.425152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.425263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.425299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.425427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.425453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.425588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.425615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.425727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.425755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.425866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.425892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 
00:35:46.686 [2024-11-18 07:21:07.426106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.426142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.426312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.426348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.426495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.426522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.426629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.426655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.426764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.426792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.426937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.426963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.427065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.427091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.427220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.427256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.427384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.427428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.427523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.427550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 
00:35:46.686 [2024-11-18 07:21:07.427635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.427662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.427786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.427813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.427929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.427956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.428083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.428122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.428268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.428317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.428456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.428501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.428623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.428650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.428746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.428772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.428917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.686 [2024-11-18 07:21:07.428943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.686 qpair failed and we were unable to recover it. 00:35:46.686 [2024-11-18 07:21:07.429023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.429048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 
00:35:46.687 [2024-11-18 07:21:07.429180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.429232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.429354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.429380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.429456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.429482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.429578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.429604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.429694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.429720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.429792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.429818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.429923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.429948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.430043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.430068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.430186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.430214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.430314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.430353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 
00:35:46.687 [2024-11-18 07:21:07.430505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.430533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.430622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.430649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.430762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.430788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.430906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.430933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.431053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.431080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.431197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.431225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.431336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.431363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.431479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.431520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.431606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.431632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.431774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.431800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 
00:35:46.687 [2024-11-18 07:21:07.431962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.432010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.432093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.432120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.432236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.432261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.432388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.432414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.432532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.432559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.432641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.432667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.432779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.432816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.432944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.432973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.433101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.433128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.433204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.433239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 
00:35:46.687 [2024-11-18 07:21:07.433331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.433358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.433473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.433504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.433620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.687 [2024-11-18 07:21:07.433647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.687 qpair failed and we were unable to recover it. 00:35:46.687 [2024-11-18 07:21:07.433769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.433816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.433984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.434020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.434183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.434219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.434365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.434392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.434517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.434544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.434631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.434657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.434775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.434801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 
00:35:46.688 [2024-11-18 07:21:07.434941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.434977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.435165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.435201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.435331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.435359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.435487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.435533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.435678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.435705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.435798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.435826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.435913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.435939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.436088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.436137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.436249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.436275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.436370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.436409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 
00:35:46.688 [2024-11-18 07:21:07.436528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.436557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.436654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.436682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.436797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.436823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.436908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.436935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.437027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.437054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.437156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.437184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.437299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.437325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.437407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.437433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.437521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.437548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.437658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.437684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 
00:35:46.688 [2024-11-18 07:21:07.437762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.437791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.437876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.437903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.437997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.438025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.688 qpair failed and we were unable to recover it. 00:35:46.688 [2024-11-18 07:21:07.438130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.688 [2024-11-18 07:21:07.438157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.438295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.438321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.438408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.438434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.438553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.438579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.438718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.438749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.438869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.438896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.439008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.439035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 
00:35:46.689 [2024-11-18 07:21:07.439118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.439143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.439230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.439256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.439395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.439420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.439556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.439583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.439683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.439709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.439796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.439821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.439961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.439987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.440116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.440154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.440250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.440279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.440390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.440417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 
00:35:46.689 [2024-11-18 07:21:07.440558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.440585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.440701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.440729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.440818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.440844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.440992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.441030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.441150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.441194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.441334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.441361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.441445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.441472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.441600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.441627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.441738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.441764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.441886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.441919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 
00:35:46.689 [2024-11-18 07:21:07.442054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.442102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.442184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.442222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.442337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.442363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.442482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.442515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.442628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.442654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.689 qpair failed and we were unable to recover it. 00:35:46.689 [2024-11-18 07:21:07.442736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.689 [2024-11-18 07:21:07.442762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 00:35:46.690 [2024-11-18 07:21:07.442891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.442917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 00:35:46.690 [2024-11-18 07:21:07.443033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.443059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 00:35:46.690 [2024-11-18 07:21:07.443171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.443199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 00:35:46.690 [2024-11-18 07:21:07.443338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.443364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 
00:35:46.690 [2024-11-18 07:21:07.443481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.443527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 00:35:46.690 [2024-11-18 07:21:07.443639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.443666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 00:35:46.690 [2024-11-18 07:21:07.443746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.443772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 00:35:46.690 [2024-11-18 07:21:07.443899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.443926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 00:35:46.690 [2024-11-18 07:21:07.444042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.444069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 00:35:46.690 [2024-11-18 07:21:07.444179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.444206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 00:35:46.690 [2024-11-18 07:21:07.444316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.444354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 00:35:46.690 [2024-11-18 07:21:07.444441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.444473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 00:35:46.690 [2024-11-18 07:21:07.444597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.444625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 00:35:46.690 [2024-11-18 07:21:07.444741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.444768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 
00:35:46.690 [2024-11-18 07:21:07.444915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.444941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 00:35:46.690 [2024-11-18 07:21:07.445058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.445100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 00:35:46.690 [2024-11-18 07:21:07.445322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.445360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 00:35:46.690 [2024-11-18 07:21:07.445521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.445560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 00:35:46.690 [2024-11-18 07:21:07.445651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.445679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 00:35:46.690 [2024-11-18 07:21:07.445796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.445823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 00:35:46.690 [2024-11-18 07:21:07.445937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.445963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 00:35:46.690 [2024-11-18 07:21:07.446115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.446168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 00:35:46.690 [2024-11-18 07:21:07.446254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.446280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.690 qpair failed and we were unable to recover it. 00:35:46.690 [2024-11-18 07:21:07.446367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.690 [2024-11-18 07:21:07.446395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 
00:35:46.691 [2024-11-18 07:21:07.446477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.446510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.446606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.446633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.446776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.446803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.446962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.447000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.447147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.447185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.447362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.447408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.447531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.447560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.447658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.447685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.447813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.447839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.448029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.448082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 
00:35:46.691 [2024-11-18 07:21:07.448231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.448280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.448372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.448399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.448510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.448538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.448622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.448648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.449422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.449454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.449615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.449642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.449731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.449758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.449903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.449931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.450008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.450033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.450120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.450147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 
00:35:46.691 [2024-11-18 07:21:07.450242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.450269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.450411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.450437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.450543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.450571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.450662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.450688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.450781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.450807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.450897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.450924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.451037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.451065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.451185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.451219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.451335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.451361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.451451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.451477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 
00:35:46.691 [2024-11-18 07:21:07.451579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.451606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.451690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.451716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.451819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.451865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-11-18 07:21:07.452057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-11-18 07:21:07.452093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.452232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.452281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.452420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.452447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.452573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.452602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.452689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.452716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.452867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.452913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.453044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.453093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 
00:35:46.692 [2024-11-18 07:21:07.453209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.453235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.453367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.453394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.453520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.453546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.453668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.453694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.453842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.453889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.454015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.454062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.454167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.454201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.454323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.454348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.454435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.454461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.454587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.454614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 
00:35:46.692 [2024-11-18 07:21:07.454732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.454757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.454841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.454867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.454986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.455012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.455125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.455151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.455235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.455261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.455355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.455382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.455468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.455511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.455601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.455627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.455736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.455762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.455877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.455903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 
00:35:46.692 [2024-11-18 07:21:07.456038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.456064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.456169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.456195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.456306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.456331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.456441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.456467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.456588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.456614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.456708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.456733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.456884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-11-18 07:21:07.456910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-11-18 07:21:07.457023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.457053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.457174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.457210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.457303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.457330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 
00:35:46.693 [2024-11-18 07:21:07.457473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.457520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.457609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.457636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.457748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.457774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.457885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.457911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.458004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.458031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.458139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.458165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.458286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.458312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.458424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.458450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.458582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.458620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.458742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.458771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 
00:35:46.693 [2024-11-18 07:21:07.458940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.458978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.459117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.459166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.459251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.459278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.459363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.459390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.459475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.459514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.459601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.459628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.459713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.459741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.459832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.459858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.459934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.459961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.460081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.460108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 
00:35:46.693 [2024-11-18 07:21:07.460194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.460221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.460300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.460326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.460405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.460431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.460635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.460663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.460759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.460798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.460945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.460973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.461090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.461118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.461233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.461261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.461348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.461375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.461505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.461534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 
00:35:46.693 [2024-11-18 07:21:07.461666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.461712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-11-18 07:21:07.461898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-11-18 07:21:07.461944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.462027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.462053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.462136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.462165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.462267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.462307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.462452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.462480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.462633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.462663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.462780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.462834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.462990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.463038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.463196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.463245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 
00:35:46.694 [2024-11-18 07:21:07.463359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.463386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.463475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.463518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.463661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.463688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.463870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.463902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.464041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.464090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.464272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.464326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.464433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.464459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.464550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.464579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.464689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.464716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.464825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.464851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 
00:35:46.694 [2024-11-18 07:21:07.464924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.464951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.465043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.465071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.465182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.465209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.465301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.465329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.465442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.465469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.465604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.465631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.465742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.465768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.465931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.465977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.466100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.466147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.466252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.466279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 
00:35:46.694 [2024-11-18 07:21:07.466397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.466424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.466526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.466555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.466643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.466672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.466774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.466818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.466939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.466977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.467090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.467116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.467202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.467228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-11-18 07:21:07.467342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-11-18 07:21:07.467368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.467535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.467575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.467776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.467805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 
00:35:46.695 [2024-11-18 07:21:07.467925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.467951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.468086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.468132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.468273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.468300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.468438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.468464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.468620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.468648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.468787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.468832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.469001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.469050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.469181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.469235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.469375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.469402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.469520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.469547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 
00:35:46.695 [2024-11-18 07:21:07.469658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.469685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.469821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.469860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.470014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.470041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.470152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.470178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.470276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.470324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.470480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.470516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.470624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.470650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.470755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.470810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.470963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.470991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.471109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.471156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 
00:35:46.695 [2024-11-18 07:21:07.471260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.471292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.471429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.471472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.471627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.471655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.471741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.471770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.471915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.471966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.472108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-11-18 07:21:07.472158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.695 qpair failed and we were unable to recover it. 00:35:46.695 [2024-11-18 07:21:07.472289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.472314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.472424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.472450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.472564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.472590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.472672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.472698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 
00:35:46.696 [2024-11-18 07:21:07.472790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.472816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.472962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.472996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.473112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.473153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.473289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.473332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.473468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.473504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.473623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.473649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.473765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.473791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.473982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.474013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.474148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.474196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.474348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.474374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 
00:35:46.696 [2024-11-18 07:21:07.474514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.474554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.474650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.474678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.474791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.474842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.474973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.475020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.475184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.475233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.475321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.475348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.475461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.475487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.475577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.475602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.475727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.475755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.475865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.475895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 
00:35:46.696 [2024-11-18 07:21:07.476049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.476078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.476213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.476262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.476431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.476459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.476594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.476634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.476749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.476776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.476906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.476956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.477106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.477153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.477264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.477309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.477437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.477466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.696 [2024-11-18 07:21:07.477581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.477608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 
00:35:46.696 [2024-11-18 07:21:07.477693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-11-18 07:21:07.477718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.696 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.477807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.477837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.477925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.477951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.478062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.478088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.478169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.478196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.478277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.478303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.478384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.478410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.478504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.478531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.478640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.478666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.478775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.478823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 
00:35:46.697 [2024-11-18 07:21:07.478970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.479002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.479130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.479162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.479287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.479319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.479471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.479541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.479632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.479661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.479748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.479776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.479907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.479952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.480123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.480155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.480308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.480356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.480441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.480467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 
00:35:46.697 [2024-11-18 07:21:07.480567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.480606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.480757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.480785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.480906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.480932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.481098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.481148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.481272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.481302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.481434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.481460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.481561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.481588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.481704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.481731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.481897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.481953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.482150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.482198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 
00:35:46.697 [2024-11-18 07:21:07.482333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.482360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.482505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.482532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.482620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.482646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.482733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.482759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.482877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.482904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.482987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.483015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.483144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.483173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.483274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.483300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.483427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-11-18 07:21:07.483466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.697 qpair failed and we were unable to recover it. 00:35:46.697 [2024-11-18 07:21:07.483566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.483596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 
00:35:46.698 [2024-11-18 07:21:07.483737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.483764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.483851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.483877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.484014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.484044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.484163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.484192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.484327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.484354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.484509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.484548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.484664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.484691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.484774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.484801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.485040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.485092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.485220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.485261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 
00:35:46.698 [2024-11-18 07:21:07.485447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.485472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.485571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.485597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.485705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.485731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.485844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.485870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.486015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.486046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.486157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.486205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.486359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.486391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.486499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.486528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.486618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.486644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.486734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.486762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 
00:35:46.698 [2024-11-18 07:21:07.486856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.486883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.487064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.487135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.487284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.487332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.487448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.487475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.487571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.487598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.487721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.487748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.487851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.487904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.488065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.488115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.488208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.488239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.488414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.488442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 
00:35:46.698 [2024-11-18 07:21:07.488560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.488588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.488702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.488728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.488814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.488841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.488971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.489019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.489111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.489136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.489247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.489273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.489397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.698 [2024-11-18 07:21:07.489424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.698 qpair failed and we were unable to recover it. 00:35:46.698 [2024-11-18 07:21:07.489520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.489548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.489635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.489663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.489774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.489799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 
00:35:46.699 [2024-11-18 07:21:07.489927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.489953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.490123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.490156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.490354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.490383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.490530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.490558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.490679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.490705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.490784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.490809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.490949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.490996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.491075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.491104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.491234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.491274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.491372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.491400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 
00:35:46.699 [2024-11-18 07:21:07.491526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.491556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.491670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.491699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.491810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.491836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.491919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.491946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.492074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.492105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.492213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.492267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.492382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.492409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.492537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.492564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.492675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.492701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.492793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.492819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 
00:35:46.699 [2024-11-18 07:21:07.492937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.492986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.493120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.493167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.493258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.493285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.493422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.493447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.493570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.493599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.493684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.493711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.493799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.493825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.493918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.493945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.494061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.494109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.494212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.494241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 
00:35:46.699 [2024-11-18 07:21:07.494355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.494381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.494500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.494530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.494639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.494665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.494749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.494775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.494941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.494989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.699 qpair failed and we were unable to recover it. 00:35:46.699 [2024-11-18 07:21:07.495120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.699 [2024-11-18 07:21:07.495164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.495295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.495339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.495427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.495455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.495579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.495607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.495722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.495748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 
00:35:46.700 [2024-11-18 07:21:07.495859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.495885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.496026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.496052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.496167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.496195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.496306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.496335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.496435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.496461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.496582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.496609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.496699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.496742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.496873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.496903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.497069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.497098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.497218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.497247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 
00:35:46.700 [2024-11-18 07:21:07.497362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.497391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.497525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.497555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.497645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.497672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.497778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.497805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.497894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.497921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.498039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.498073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.498167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.498193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.498329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.498355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.498446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.498473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.498562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.498589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 
00:35:46.700 [2024-11-18 07:21:07.498697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.498730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.498814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.498840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.498975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.499014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.499100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.499128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.499262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.499300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.499398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.700 [2024-11-18 07:21:07.499425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.700 qpair failed and we were unable to recover it. 00:35:46.700 [2024-11-18 07:21:07.499573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.499606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.499690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.499715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.499853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.499880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.499982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.500009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 
00:35:46.701 [2024-11-18 07:21:07.500113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.500141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.500255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.500282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.500385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.500425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.500574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.500603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.500748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.500780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.500903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.500959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.501100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.501132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.501241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.501270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.501422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.501448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.501560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.501587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 
00:35:46.701 [2024-11-18 07:21:07.501691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.501716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.501860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.501905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.502001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.502035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.502188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.502235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.502372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.502397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.502543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.502572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.502653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.502680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.502794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.502820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.502972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.502999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.503136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.503163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 
00:35:46.701 [2024-11-18 07:21:07.503251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.503280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.503397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.503424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.503547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.503587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.503692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.503719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.503810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.503836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.503941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.503974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.504117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.504165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.504299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.504327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.504448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.504474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 00:35:46.701 [2024-11-18 07:21:07.504603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.701 [2024-11-18 07:21:07.504632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.701 qpair failed and we were unable to recover it. 
00:35:46.701 [2024-11-18 07:21:07.504710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.504737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.504963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.505012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.505121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.505150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.505266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.505293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.505408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.505435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.505520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.505548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.505636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.505662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.505807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.505833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.505948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.505975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.506088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.506115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 
00:35:46.702 [2024-11-18 07:21:07.506215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.506254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.506353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.506381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.506502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.506530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.506632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.506659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.506750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.506776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.506889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.506916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.507023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.507052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.507206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.507235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.507399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.507426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.507571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.507598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 
00:35:46.702 [2024-11-18 07:21:07.507678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.507706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.507832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.507866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.507994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.508052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.508175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.508223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.508327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.508370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.508534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.508563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.508652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.508679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.508798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.508824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.508899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.508925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.509130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.509177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 
00:35:46.702 [2024-11-18 07:21:07.509296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.509323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.509440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.509467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.509602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.509641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.509754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.509782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.509874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.509899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.510002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.510054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.510207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.510253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.510381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.510429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.702 [2024-11-18 07:21:07.510547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.702 [2024-11-18 07:21:07.510575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.702 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.510690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.510727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 
00:35:46.703 [2024-11-18 07:21:07.510821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.510875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.510969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.510998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.511096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.511127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.511221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.511251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.511383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.511412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.511527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.511555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.511651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.511679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.511768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.511794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.511888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.511914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.512059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.512085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 
00:35:46.703 [2024-11-18 07:21:07.512201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.512227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.512335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.512362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.512484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.512517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.512602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.512630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.512723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.512761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.512883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.512911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.513021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.513047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.513128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.513154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.513235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.513261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.513373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.513400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 
00:35:46.703 [2024-11-18 07:21:07.513515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.513543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.513618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.513644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.513729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.513760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.513876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.513903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.513978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.514003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.514090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.514122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.514237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.514264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.514354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.514380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.514473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.514509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.514597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.514624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 
00:35:46.703 [2024-11-18 07:21:07.514785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.514827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.514942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.514970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.515064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.515092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.515204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.515230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.515335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.515362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.515474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.515509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.515608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.515636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.515723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.515749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.515829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.703 [2024-11-18 07:21:07.515853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.703 qpair failed and we were unable to recover it. 00:35:46.703 [2024-11-18 07:21:07.516014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.516060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 
00:35:46.704 [2024-11-18 07:21:07.516263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.516307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.516464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.516496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.516614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.516641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.516721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.516747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.516926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.516987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.517093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.517124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.517244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.517274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.517408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.517434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.517538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.517565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.517661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.517699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 
00:35:46.704 [2024-11-18 07:21:07.517816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.517870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.518009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.518042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.518199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.518228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.518327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.518353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.518471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.518517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.518637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.518663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.518755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.518780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.518883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.518915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.519055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.519087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.519257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.519305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 
00:35:46.704 [2024-11-18 07:21:07.519435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.519474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.519628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.519657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.519772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.519815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.520016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.520049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.520207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.520252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.520393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.520436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.520546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.520575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.520689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.520716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.520856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.520883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.521003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.521032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 
00:35:46.704 [2024-11-18 07:21:07.521154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.521196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.521288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.521319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.521452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.521478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.521568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.521593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.704 qpair failed and we were unable to recover it. 00:35:46.704 [2024-11-18 07:21:07.521738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.704 [2024-11-18 07:21:07.521764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 00:35:46.705 [2024-11-18 07:21:07.521898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.521940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 00:35:46.705 [2024-11-18 07:21:07.522134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.522174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 00:35:46.705 [2024-11-18 07:21:07.522305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.522334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 00:35:46.705 [2024-11-18 07:21:07.522436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.522463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 00:35:46.705 [2024-11-18 07:21:07.522582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.522609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 
00:35:46.705 [2024-11-18 07:21:07.522716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.522742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 00:35:46.705 [2024-11-18 07:21:07.522905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.522930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 00:35:46.705 [2024-11-18 07:21:07.523066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.523092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 00:35:46.705 [2024-11-18 07:21:07.523206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.523238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 00:35:46.705 [2024-11-18 07:21:07.523396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.523426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 00:35:46.705 [2024-11-18 07:21:07.523595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.523635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 00:35:46.705 [2024-11-18 07:21:07.523753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.523780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 00:35:46.705 [2024-11-18 07:21:07.523895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.523921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 00:35:46.705 [2024-11-18 07:21:07.524014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.524040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 00:35:46.705 [2024-11-18 07:21:07.524182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.524235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 
00:35:46.705 [2024-11-18 07:21:07.524318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.524345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 00:35:46.705 [2024-11-18 07:21:07.524429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.524458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 00:35:46.705 [2024-11-18 07:21:07.524555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.524583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 00:35:46.705 [2024-11-18 07:21:07.524725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.524751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 00:35:46.705 [2024-11-18 07:21:07.524868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.524895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 00:35:46.705 [2024-11-18 07:21:07.525034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.525061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 00:35:46.705 [2024-11-18 07:21:07.525162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.525188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 00:35:46.705 [2024-11-18 07:21:07.525322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.525353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 00:35:46.705 [2024-11-18 07:21:07.525525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.705 [2024-11-18 07:21:07.525579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.705 qpair failed and we were unable to recover it. 00:35:46.705 [2024-11-18 07:21:07.525671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.525698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 
00:35:46.706 [2024-11-18 07:21:07.525789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.525816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.525940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.525968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.526157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.526186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.526306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.526332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.526510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.526536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.526623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.526649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.526736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.526781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.526940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.526989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.527197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.527261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.527384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.527413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 
00:35:46.706 [2024-11-18 07:21:07.527503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.527531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.527674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.527701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.527793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.527819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.527898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.527925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.528084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.528135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.528315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.528341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.528454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.528486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.528636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.528662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.528813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.528841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.528962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.528989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 
00:35:46.706 [2024-11-18 07:21:07.529151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.529201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.529359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.529389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.529519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.529546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.529639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.529665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.529790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.529817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.529929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.529956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.530048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.530078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.530165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.530195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.530316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.530360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.530487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.530522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 
00:35:46.706 [2024-11-18 07:21:07.530617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.530644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.530727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.530754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.530848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.530875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.530984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.531010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.531133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.531171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.531304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.531343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.706 qpair failed and we were unable to recover it. 00:35:46.706 [2024-11-18 07:21:07.531461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.706 [2024-11-18 07:21:07.531499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.531603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.531631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.531714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.531740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.531862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.531890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 
00:35:46.707 [2024-11-18 07:21:07.532027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.532076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.532265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.532294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.532428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.532455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.532548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.532577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.532691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.532717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.532837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.532863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.532950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.532983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.533186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.533225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.533312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.533339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.533479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.533522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 
00:35:46.707 [2024-11-18 07:21:07.533649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.533675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.533771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.533800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.533911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.533937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.534051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.534079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.534172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.534199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.534349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.534375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.534469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.534505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.534653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.534679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.534770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.534796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.534911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.534938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 
00:35:46.707 [2024-11-18 07:21:07.535051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.535077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.535190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.535215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.535328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.535355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.535452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.535501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.535625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.535652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.535744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.535772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.535915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.535948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.536062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.536105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.536222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.536249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.536393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.536421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 
00:35:46.707 [2024-11-18 07:21:07.536535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.536561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.536649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.536676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.536806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.536833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.707 [2024-11-18 07:21:07.536978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.707 [2024-11-18 07:21:07.537014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.707 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.537102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.537128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.537260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.537288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.537408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.537435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.537575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.537602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.537733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.537776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.537943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.537991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 
00:35:46.708 [2024-11-18 07:21:07.538096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.538133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.538294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.538322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.538451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.538502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.538637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.538675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.538810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.538859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.539012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.539051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.539232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.539265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.539372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.539399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.539529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.539568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.539719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.539747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 
00:35:46.708 [2024-11-18 07:21:07.539885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.539933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.540099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.540146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.540287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.540336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.540451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.540479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.540602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.540628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.540715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.540742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.540983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.541033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.541167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.541209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.541444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.541476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.541598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.541625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 
00:35:46.708 [2024-11-18 07:21:07.541743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.541769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.541851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.541877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.541973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.541999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.542136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.542169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.542284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.542310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.542506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.542533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.542639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.542665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.542757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.542783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.542889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.542931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.543071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.543114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 
00:35:46.708 [2024-11-18 07:21:07.543315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.543359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.708 qpair failed and we were unable to recover it. 00:35:46.708 [2024-11-18 07:21:07.543476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.708 [2024-11-18 07:21:07.543515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.543664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.543692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.543812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.543848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.544016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.544055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.544168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.544205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.544308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.544338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.544472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.544509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.544680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.544719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.544845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.544874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 
00:35:46.709 [2024-11-18 07:21:07.544971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.544999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.545111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.545160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.545248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.545274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.545406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.545444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.545592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.545620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.545702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.545730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.545835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.545861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.545948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.545973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.546058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.546084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.546229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.546279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 
00:35:46.709 [2024-11-18 07:21:07.546450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.546476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.546604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.546631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.546714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.546739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.546957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.546988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.547160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.547210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.547313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.547344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.547511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.547538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.547665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.547692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.547797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.547832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.547957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.547983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 
00:35:46.709 [2024-11-18 07:21:07.548097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.548125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.548275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.548301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.709 qpair failed and we were unable to recover it. 00:35:46.709 [2024-11-18 07:21:07.548449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.709 [2024-11-18 07:21:07.548475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.548607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.548633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.548745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.548774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.548852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.548879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.549013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.549044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.549178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.549207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.549339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.549364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.549452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.549478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 
00:35:46.710 [2024-11-18 07:21:07.549573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.549603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.549683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.549709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.549790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.549816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.549893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.549918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.550130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.550162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.550339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.550383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.550475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.550513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.550612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.550639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.550715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.550742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.550894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.550942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 
00:35:46.710 [2024-11-18 07:21:07.551145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.551180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.551293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.551323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.551412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.551443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.551572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.551599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.551681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.551725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.551879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.551908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.552062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.552110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.552289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.552318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.552402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.552446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.552568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.552595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 
00:35:46.710 [2024-11-18 07:21:07.552734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.552782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.552899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.552951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.553090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.553139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.553309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.553345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.553447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.553473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.553569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.553596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.553691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.553717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.553840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.553873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.553985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.554011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 00:35:46.710 [2024-11-18 07:21:07.554118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.710 [2024-11-18 07:21:07.554144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.710 qpair failed and we were unable to recover it. 
00:35:46.710 [2024-11-18 07:21:07.554258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.554283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.554385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.554424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.554522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.554550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.554640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.554670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.554804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.554832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.555001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.555049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.555228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.555258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.555360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.555389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.555520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.555547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.555666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.555694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 
00:35:46.711 [2024-11-18 07:21:07.555830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.555873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.555970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.555996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.556100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.556129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.556233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.556264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.556348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.556373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.556484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.556517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.556636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.556662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.556753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.556778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.556918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.556943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.557071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.557106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 
00:35:46.711 [2024-11-18 07:21:07.557260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.557291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.557428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.557456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.557593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.557621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.557754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.557784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.557903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.557932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.558086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.558132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.558269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.558295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.558409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.558434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.558538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.558565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.558701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.558726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 
00:35:46.711 [2024-11-18 07:21:07.558816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.558841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.558952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.558977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.559085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.559110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.559198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.559223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.559341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.559366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.559451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.559476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.559593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.559618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.711 [2024-11-18 07:21:07.559695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.711 [2024-11-18 07:21:07.559721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.711 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.559806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.559852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.559977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.560021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 
00:35:46.712 [2024-11-18 07:21:07.560145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.560194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.560315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.560358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.560505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.560532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.560613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.560641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.560746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.560779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.560904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.560956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.561122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.561170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.561347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.561373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.561450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.561476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.561591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.561617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 
00:35:46.712 [2024-11-18 07:21:07.561695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.561720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.561837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.561870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.562015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.562046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.562214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.562273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.562374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.562402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.562501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.562529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.562666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.562692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.562814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.562840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.562917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.562944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.563028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.563055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 
00:35:46.712 [2024-11-18 07:21:07.563139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.563166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.563290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.563330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.563425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.563452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.563550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.563578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.563688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.563715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.563853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.563889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.564035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.564079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.564206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.564237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.564394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.564419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.564506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.564533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 
00:35:46.712 [2024-11-18 07:21:07.564643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.564669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.564801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.564829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.564953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.564982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.565136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.565165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.565335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.712 [2024-11-18 07:21:07.565382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.712 qpair failed and we were unable to recover it. 00:35:46.712 [2024-11-18 07:21:07.565503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.565530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.565640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.565666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.565777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.565823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.565988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.566021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.566166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.566223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 
00:35:46.713 [2024-11-18 07:21:07.566379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.566408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.566536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.566563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.566688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.566715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.566832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.566858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.566983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.567016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.567124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.567153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.567275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.567302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.567426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.567452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.567544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.567570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.567704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.567729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 
00:35:46.713 [2024-11-18 07:21:07.567867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.567893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.567979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.568004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.568137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.568163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.568334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.568366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.568482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.568536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.568632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.568658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.568770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.568796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.568909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.568936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.569051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.569077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.569161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.569187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 
00:35:46.713 [2024-11-18 07:21:07.569259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.569286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.569398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.569424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.569532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.569558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.569676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.569702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.569804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.569832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.569937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.569979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.570123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.570172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.570300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.570327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.570434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.570460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.570557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.570585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 
00:35:46.713 [2024-11-18 07:21:07.570730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.570757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.570895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.570921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.571034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.713 [2024-11-18 07:21:07.571061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.713 qpair failed and we were unable to recover it. 00:35:46.713 [2024-11-18 07:21:07.571192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.571233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.571346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.571372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.571485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.571518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.571629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.571654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.571786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.571830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.571969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.572006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.572143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.572174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 
00:35:46.714 [2024-11-18 07:21:07.572289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.572321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.572431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.572474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.572613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.572639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.572717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.572743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.572858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.572883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.573022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.573047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.573126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.573151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.573261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.573293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.573427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.573471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.573626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.573652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 
00:35:46.714 [2024-11-18 07:21:07.573765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.573811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.573915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.573947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.574092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.574125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.574259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.574290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.574381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.574413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.574536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.574567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.574739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.574770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.574897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.574929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.575034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.575066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.575238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.575301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 
00:35:46.714 [2024-11-18 07:21:07.575429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.575460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.575622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.575652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.575740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.575769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.575923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.575953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.576068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.576118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.576219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.714 [2024-11-18 07:21:07.576254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.714 qpair failed and we were unable to recover it. 00:35:46.714 [2024-11-18 07:21:07.576377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.576407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.576558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.576589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.576683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.576713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.576827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.576856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 
00:35:46.715 [2024-11-18 07:21:07.576974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.577004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.577127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.577157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.577264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.577307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.577405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.577436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.577578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.577609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.577714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.577743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.577829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.577857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.577953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.577999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.578150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.578198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.578343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.578375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 
00:35:46.715 [2024-11-18 07:21:07.578465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.578520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.578663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.578695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.578850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.578896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.578992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.579024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.579147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.579180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.579283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.579314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.579469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.579512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.579621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.579654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.579785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.579817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.579955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.579987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 
00:35:46.715 [2024-11-18 07:21:07.580160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.580209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.580335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.580364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.580498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.580535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.580687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.580717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.580854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.580902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.581027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.581056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.581153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.581183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.581275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.581303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.581441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.581484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.581602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.581633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 
00:35:46.715 [2024-11-18 07:21:07.581780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.581829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.582041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.582078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.582188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.582218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.715 [2024-11-18 07:21:07.582346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.715 [2024-11-18 07:21:07.582376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.715 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.582555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.582604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.582747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.582793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.582950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.583001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.583117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.583147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.583265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.583294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.583446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.583475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 
00:35:46.716 [2024-11-18 07:21:07.583569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.583599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.583699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.583728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.583835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.583864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.584012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.584041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.584164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.584193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.584340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.584370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.584503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.584534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.584689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.584719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.584857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.584907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.585008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.585040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 
00:35:46.716 [2024-11-18 07:21:07.585178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.585211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.585338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.585369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.585504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.585551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.585684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.585716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.585883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.585921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.586070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.586120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.586360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.586397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.586558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.586588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.586709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.586744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.586838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.586871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 
00:35:46.716 [2024-11-18 07:21:07.587023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.587059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.587293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.587330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.587483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.587526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.587618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.587668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.587838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.587871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.587978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.588025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.588210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.588265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.588419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.588458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.588626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.588659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.716 [2024-11-18 07:21:07.588769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.588802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 
00:35:46.716 [2024-11-18 07:21:07.588983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.716 [2024-11-18 07:21:07.589030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.716 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.589199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.589237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.589368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.589397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.589483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.589524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.589677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.589725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.589827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.589855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.589991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.590021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.590112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.590143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.590263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.590293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.590456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.590506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 
00:35:46.717 [2024-11-18 07:21:07.590685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.590734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.590917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.590951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.591093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.591126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.591286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.591320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.591482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.591559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.591681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.591716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.591847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.591880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.591976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.592026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.592180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.592218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.592457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.592512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 
00:35:46.717 [2024-11-18 07:21:07.592660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.592690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.592811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.592841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.592967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.592995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.593101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.593130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.593259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.593288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.593456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.593500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.593663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.593690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.593820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.593853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.594027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.594064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.594244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.594300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 
00:35:46.717 [2024-11-18 07:21:07.594461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.594497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.594641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.594667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.594740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.594766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.594897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.594924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.595071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.595114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.595272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.595322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.595498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.595547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.595648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.595680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.717 qpair failed and we were unable to recover it. 00:35:46.717 [2024-11-18 07:21:07.595821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.717 [2024-11-18 07:21:07.595853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.595959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.595992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 
00:35:46.718 [2024-11-18 07:21:07.596157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.596189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.596341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.596379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.596539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.596582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.596660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.596686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.596802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.596828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.596993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.597025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.597168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.597214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.597421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.597454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.597609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.597636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.597759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.597785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 
00:35:46.718 [2024-11-18 07:21:07.597892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.597949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.598132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.598185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.598298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.598340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.598453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.598480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.598582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.598608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.598722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.598769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.598877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.598910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.599035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.599073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.599274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.599305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.599439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.599479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 
00:35:46.718 [2024-11-18 07:21:07.599612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.599638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.599807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.599851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.599957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.599989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.600098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.600130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.600303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.600336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.600470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.600523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.600650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.600675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.600797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.600845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.600932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.600979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.601104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.601129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 
00:35:46.718 [2024-11-18 07:21:07.601284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.601316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.601429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.601461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.601600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.601625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.601770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.601802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.601929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.601961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.602064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.718 [2024-11-18 07:21:07.602106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.718 qpair failed and we were unable to recover it. 00:35:46.718 [2024-11-18 07:21:07.602260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.719 [2024-11-18 07:21:07.602286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.719 qpair failed and we were unable to recover it. 00:35:46.719 [2024-11-18 07:21:07.602423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.719 [2024-11-18 07:21:07.602449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.719 qpair failed and we were unable to recover it. 00:35:46.719 [2024-11-18 07:21:07.602568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.719 [2024-11-18 07:21:07.602594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.719 qpair failed and we were unable to recover it. 00:35:46.719 [2024-11-18 07:21:07.602694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.719 [2024-11-18 07:21:07.602727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.719 qpair failed and we were unable to recover it. 
00:35:46.719 [2024-11-18 07:21:07.602862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.719 [2024-11-18 07:21:07.602893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.719 qpair failed and we were unable to recover it. 00:35:46.719 [2024-11-18 07:21:07.602993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.719 [2024-11-18 07:21:07.603025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:46.719 qpair failed and we were unable to recover it. 00:35:46.719 [2024-11-18 07:21:07.603146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.719 [2024-11-18 07:21:07.603178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.005 [2024-11-18 07:21:07.603274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-11-18 07:21:07.603306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.005 [2024-11-18 07:21:07.603435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-11-18 07:21:07.603479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.005 [2024-11-18 07:21:07.603579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-11-18 07:21:07.603617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.005 [2024-11-18 07:21:07.603760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-11-18 07:21:07.603813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.005 [2024-11-18 07:21:07.603933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-11-18 07:21:07.603968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.604095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.604143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.604282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.604309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 
00:35:47.006 [2024-11-18 07:21:07.604451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.604479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.604607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.604655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.604743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.604771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.604854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.604880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.604972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.604999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.605095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.605134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.605254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.605286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.605379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.605405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.605504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.605531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.605631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.605663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 
00:35:47.006 [2024-11-18 07:21:07.605809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.605842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.605996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.606035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.606160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.606198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.606315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.606341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.606457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.606484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.606602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.606628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.606732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.606764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.606878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.606910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.607035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.607087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.607207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.607235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 
00:35:47.006 [2024-11-18 07:21:07.607349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.607376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.607460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.607487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.607616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.607665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.607789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.607851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.607979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.608025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.608163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.608190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.608305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.608334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.608461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.608487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.608618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.608644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.608782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.608809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 
00:35:47.006 [2024-11-18 07:21:07.608948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.608989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.609236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.609301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.609444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.609470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.609615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.609641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.609754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-11-18 07:21:07.609801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-11-18 07:21:07.609972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.610004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.610140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.610173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.610382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.610410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.610528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.610555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.610649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.610675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 
00:35:47.007 [2024-11-18 07:21:07.610758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.610803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.611009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.611059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.611222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.611273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.611394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.611420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.611512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.611539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.611644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.611670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.611782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.611808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.611940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.611972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.612074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.612100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.612288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.612319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 
00:35:47.007 [2024-11-18 07:21:07.612471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.612526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.612625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.612653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.612798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.612844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.612955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.613006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.613089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.613116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.613275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.613309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.613436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.613462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.613571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.613599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.613710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.613737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.613884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.613910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 
00:35:47.007 [2024-11-18 07:21:07.614057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.614084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.614166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.614192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.614300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.614326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.614411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.614443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.614542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.614569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.614662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.614689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.614774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.614801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.614926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.614954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.615060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.615086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.615203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.615229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 
00:35:47.007 [2024-11-18 07:21:07.615344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.615370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.615471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.615527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-11-18 07:21:07.615651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-11-18 07:21:07.615678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.615787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.615825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.615967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.615998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.616115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.616141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.616255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.616280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.616370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.616396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.616521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.616548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.616631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.616656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 
00:35:47.008 [2024-11-18 07:21:07.616747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.616779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.616895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.616928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.617070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.617102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.617239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.617271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.617380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.617411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.617555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.617581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.617718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.617749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.617854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.617885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.618013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.618045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.618207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.618239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 
00:35:47.008 [2024-11-18 07:21:07.618417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.618447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.618595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.618621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.618725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.618782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.618963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.619013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.619199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.619248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.619359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.619386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.619480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.619537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.619649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.619675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.619792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.619819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.619948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.619975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 
00:35:47.008 [2024-11-18 07:21:07.620069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.620095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.620174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.620200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.620309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.620337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.620442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.620468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.620606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.620634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.620747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.620774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.620887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.620922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.621037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.621064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.621192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.621218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-11-18 07:21:07.621329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-11-18 07:21:07.621356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 
00:35:47.008 [2024-11-18 07:21:07.621440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.621467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.621591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.621640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.621802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.621843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.622029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.622073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.622185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.622212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.622303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.622330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.622408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.622434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.622566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.622620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.622728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.622761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.622922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.622954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 
00:35:47.009 [2024-11-18 07:21:07.623056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.623089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.623247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.623281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.623488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.623544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.623628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.623655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.623868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.623907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.624111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.624151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.624371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.624411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.624585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.624613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.624729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.624755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.624968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.625022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 
00:35:47.009 [2024-11-18 07:21:07.625184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.625225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.625439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.625480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.625641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.625667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.625764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.625790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.625875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.625918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.626026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.626059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.626240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.626309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.626516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.626563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.626681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.626709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.626818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.626871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 
00:35:47.009 [2024-11-18 07:21:07.627027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.627066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.627215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.627270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.627433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.627472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.627620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.627659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.627816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.627880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.628034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.628087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.628215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.628267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-11-18 07:21:07.628432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-11-18 07:21:07.628458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.628543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.628571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.628679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.628705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 
00:35:47.010 [2024-11-18 07:21:07.628801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.628828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.628917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.628944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.629106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.629148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.629318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.629358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.629540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.629583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.629666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.629692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.629793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.629832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.629990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.630036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.630185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.630225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.630470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.630503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 
00:35:47.010 [2024-11-18 07:21:07.630617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.630643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.630754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.630803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.630984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.631023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.631219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.631267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.631482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.631514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.631631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.631657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.631740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.631766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.631921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.631971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.632174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.632213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.632396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.632435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 
00:35:47.010 [2024-11-18 07:21:07.632633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.632660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.632749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.632775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.632887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.632914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.633059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.633096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.633214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.633259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.633429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.633468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.633622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.633648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.633753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.633785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.633921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.633958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.010 qpair failed and we were unable to recover it. 00:35:47.010 [2024-11-18 07:21:07.634107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.010 [2024-11-18 07:21:07.634140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 
00:35:47.011 [2024-11-18 07:21:07.634324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.634357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.634496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.634544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.634653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.634679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.634810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.634842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.634953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.634994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.635141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.635191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.635359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.635386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.635502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.635529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.635618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.635644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.635767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.635793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 
00:35:47.011 [2024-11-18 07:21:07.635881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.635907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.636026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.636081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.636250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.636290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.636450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.636483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.636612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.636640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.636754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.636779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.636895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.636921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.637068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.637145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.637356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.637396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.637577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.637605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 
00:35:47.011 [2024-11-18 07:21:07.637690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.637716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.637801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.637828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.637918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.637966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.638106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.638139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.638395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.638462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.638619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.638646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.638808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.638857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.639064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.639103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.639289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.639355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.639581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.639609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 
00:35:47.011 [2024-11-18 07:21:07.639756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.639782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.639928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.639960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.640129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.640155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.640322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.640365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.640505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.640548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.640691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.640717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.011 [2024-11-18 07:21:07.640830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-18 07:21:07.640862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.011 qpair failed and we were unable to recover it. 00:35:47.012 [2024-11-18 07:21:07.640969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.012 [2024-11-18 07:21:07.641002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.012 qpair failed and we were unable to recover it. 00:35:47.012 [2024-11-18 07:21:07.641140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.012 [2024-11-18 07:21:07.641174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.012 qpair failed and we were unable to recover it. 00:35:47.012 [2024-11-18 07:21:07.641312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.012 [2024-11-18 07:21:07.641354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.012 qpair failed and we were unable to recover it. 
00:35:47.012 [2024-11-18 07:21:07.641543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.012 [2024-11-18 07:21:07.641577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.012 qpair failed and we were unable to recover it. 00:35:47.012 [2024-11-18 07:21:07.641719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.012 [2024-11-18 07:21:07.641752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.012 qpair failed and we were unable to recover it. 00:35:47.012 [2024-11-18 07:21:07.641931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.012 [2024-11-18 07:21:07.641974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.012 qpair failed and we were unable to recover it. 00:35:47.012 [2024-11-18 07:21:07.642155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.012 [2024-11-18 07:21:07.642197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.012 qpair failed and we were unable to recover it. 00:35:47.012 [2024-11-18 07:21:07.642343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.012 [2024-11-18 07:21:07.642377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.012 qpair failed and we were unable to recover it. 00:35:47.012 [2024-11-18 07:21:07.642518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.012 [2024-11-18 07:21:07.642551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.012 qpair failed and we were unable to recover it. 00:35:47.012 [2024-11-18 07:21:07.642733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.012 [2024-11-18 07:21:07.642771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.012 qpair failed and we were unable to recover it. 00:35:47.012 [2024-11-18 07:21:07.642933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.012 [2024-11-18 07:21:07.642971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.012 qpair failed and we were unable to recover it. 00:35:47.012 [2024-11-18 07:21:07.643144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.012 [2024-11-18 07:21:07.643187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.012 qpair failed and we were unable to recover it. 00:35:47.012 [2024-11-18 07:21:07.643386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.012 [2024-11-18 07:21:07.643426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.012 qpair failed and we were unable to recover it. 
00:35:47.012 [2024-11-18 07:21:07.643640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.012 [2024-11-18 07:21:07.643682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420
00:35:47.012 qpair failed and we were unable to recover it.
00:35:47.012-00:35:47.015 [... the same triplet (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously from 2024-11-18 07:21:07.643854 through 07:21:07.667760 ...]
00:35:47.015 [2024-11-18 07:21:07.667914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-11-18 07:21:07.667960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-11-18 07:21:07.668134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-11-18 07:21:07.668179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-11-18 07:21:07.668395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-11-18 07:21:07.668439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-11-18 07:21:07.668668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-11-18 07:21:07.668738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-11-18 07:21:07.668982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-11-18 07:21:07.669051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-11-18 07:21:07.669255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-11-18 07:21:07.669303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-11-18 07:21:07.669505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-11-18 07:21:07.669554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-11-18 07:21:07.669738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-11-18 07:21:07.669785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-11-18 07:21:07.669960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-11-18 07:21:07.670006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-11-18 07:21:07.670162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-11-18 07:21:07.670209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 
00:35:47.015-00:35:47.018 [... the same triplet (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously from 2024-11-18 07:21:07.670399 through 07:21:07.693304 ...]
00:35:47.018 [2024-11-18 07:21:07.693463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.693526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.693710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.693762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.693932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.693987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.694234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.694287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.694461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.694535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.694706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.694759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.694960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.695016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.695228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.695280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.695502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.695557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.695826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.695879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 
00:35:47.018 [2024-11-18 07:21:07.696117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.696169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.696350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.696405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.696595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.696649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.696852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.696905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.697107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.697163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.697418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.697471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.697733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.697786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.697986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.698039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.698223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.698280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.698518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.698595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 
00:35:47.018 [2024-11-18 07:21:07.698862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.698922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.699131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.699188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.699454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.699547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.699773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.699830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.700079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.700145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.700310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.700383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.700604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.700663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.700898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.700959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.701176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.701233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.701447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.701528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 
00:35:47.018 [2024-11-18 07:21:07.701735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.701792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.702008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.702064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.018 qpair failed and we were unable to recover it. 00:35:47.018 [2024-11-18 07:21:07.702281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.018 [2024-11-18 07:21:07.702339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.702611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.702671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.702932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.702988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.703213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.703272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.703543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.703602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.703833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.703891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.704166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.704223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.704429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.704485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 
00:35:47.019 [2024-11-18 07:21:07.704692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.704748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.704996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.705052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.705253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.705308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.705531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.705588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.705792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.705848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.706105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.706160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.706384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.706441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.706675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.706731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.706955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.707012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.707230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.707286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 
00:35:47.019 [2024-11-18 07:21:07.707544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.707603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.707864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.707920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.708142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.708197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.708456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.708528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.708787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.708842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.709009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.709067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.709283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.709339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.709593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.709650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.709904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.709959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.710208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.710264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 
00:35:47.019 [2024-11-18 07:21:07.710476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.710555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.710809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.710864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.711114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.711171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.711344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.711401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.711670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.711738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.711993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.019 [2024-11-18 07:21:07.712051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.019 qpair failed and we were unable to recover it. 00:35:47.019 [2024-11-18 07:21:07.712239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.712294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.712506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.712564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.712827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.712883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.713093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.713150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 
00:35:47.020 [2024-11-18 07:21:07.713412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.713468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.713669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.713727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.713909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.713964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.714232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.714292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.714487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.714561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.714750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.714811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.715031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.715092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.715319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.715373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.715650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.715707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.715924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.715982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 
00:35:47.020 [2024-11-18 07:21:07.716233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.716289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.716538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.716602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.716835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.716895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.717133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.717195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.717375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.717437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.717653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.717715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.717983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.718042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.718274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.718335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.718568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.718631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.718842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.718904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 
00:35:47.020 [2024-11-18 07:21:07.719150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.719212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.719422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.719486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.719722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.719783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.720017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.720078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.720321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.720381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.720681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.720742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.721020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.721080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.721279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.721340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.721570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.721631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.721845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.721906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 
00:35:47.020 [2024-11-18 07:21:07.722134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.722196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.722447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.722563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.722799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.020 [2024-11-18 07:21:07.722862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.020 qpair failed and we were unable to recover it. 00:35:47.020 [2024-11-18 07:21:07.723073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.021 [2024-11-18 07:21:07.723134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.021 qpair failed and we were unable to recover it. 00:35:47.021 [2024-11-18 07:21:07.723365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.021 [2024-11-18 07:21:07.723435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.021 qpair failed and we were unable to recover it. 00:35:47.021 [2024-11-18 07:21:07.723679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.021 [2024-11-18 07:21:07.723740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.021 qpair failed and we were unable to recover it. 00:35:47.021 [2024-11-18 07:21:07.724012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.021 [2024-11-18 07:21:07.724073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.021 qpair failed and we were unable to recover it. 00:35:47.021 [2024-11-18 07:21:07.724258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.021 [2024-11-18 07:21:07.724320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.021 qpair failed and we were unable to recover it. 00:35:47.021 [2024-11-18 07:21:07.724589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.021 [2024-11-18 07:21:07.724652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.021 qpair failed and we were unable to recover it. 00:35:47.021 [2024-11-18 07:21:07.724921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.021 [2024-11-18 07:21:07.724982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.021 qpair failed and we were unable to recover it. 
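For readers following the trace: errno = 111 on Linux is ECONNREFUSED, so every connect() attempt logged above was actively refused because nothing was listening on 10.0.0.2:4420 while the target application was down (the shell trace below shows it being killed and then restarted). A minimal, self-contained sketch of the same failure mode with a plain blocking POSIX socket, deliberately independent of SPDK's posix_sock_create(), looks roughly like this; the address and port are taken from the log, everything else is illustrative:

    /* Illustrative only: reproduces the "connect() failed, errno = 111" pattern
     * from the log with a plain blocking socket; this is not SPDK code. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = {0};

        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With no listener on 10.0.0.2:4420 this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }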
00:35:47.021 [2024-11-18 07:21:07.725202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.021 [2024-11-18 07:21:07.725261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420
00:35:47.021 qpair failed and we were unable to recover it.
00:35:47.021 [... the same connect()/qpair-failure group keeps repeating from 07:21:07.725 through 07:21:07.731 and is interleaved with the shell trace; the trace lines are shown untangled below, in their original order ...]
00:35:47.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 403945 Killed "${NVMF_APP[@]}" "$@"
00:35:47.021 07:21:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:35:47.021 07:21:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:35:47.021 07:21:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:47.021 07:21:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:47.021 07:21:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:47.021 07:21:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=404728
00:35:47.021 07:21:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:35:47.021 07:21:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 404728
00:35:47.021 07:21:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 404728 ']'
00:35:47.021 07:21:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:47.021 07:21:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:47.021 07:21:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:47.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:47.021 07:21:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:47.021 07:21:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
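The restart sequence just traced (nvmfpid=404728, ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt, then waitforlisten 404728 with rpc_addr=/var/tmp/spdk.sock and max_retries=100) boils down to: launch the target inside its network namespace, then poll until the new process accepts connections on its RPC UNIX-domain socket. A rough sketch of that wait-for-listener idea, not the actual autotest_common.sh implementation, could look like the following; the socket path and retry count come from the log, while the function name and the one-second poll interval are assumptions:

    /* Illustrative sketch of the "wait until something listens on the RPC
     * socket" idea behind waitforlisten; not the real test-harness code. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Returns 0 once a listener accepts connections on `path`, -1 on timeout. */
    static int wait_for_listener(const char *path, int max_retries)
    {
        for (int attempt = 0; attempt < max_retries; attempt++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;

            struct sockaddr_un addr = {0};
            addr.sun_family = AF_UNIX;
            strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;          /* listener is up */
            }

            close(fd);
            sleep(1);              /* ENOENT/ECONNREFUSED until the target starts */
        }
        return -1;
    }

    int main(void)
    {
        if (wait_for_listener("/var/tmp/spdk.sock", 100) == 0)
            printf("target is listening on /var/tmp/spdk.sock\n");
        else
            printf("timed out waiting for /var/tmp/spdk.sock\n");
        return 0;
    }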
00:35:47.021 [2024-11-18 07:21:07.731792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.021 [2024-11-18 07:21:07.731855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420
00:35:47.021 qpair failed and we were unable to recover it.
00:35:47.022 [... the same three-line failure group continues to repeat back-to-back from 07:21:07.731 through 07:21:07.737 ...]
00:35:47.022 [2024-11-18 07:21:07.737778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.022 [2024-11-18 07:21:07.737811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.022 qpair failed and we were unable to recover it. 00:35:47.022 [2024-11-18 07:21:07.737937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.022 [2024-11-18 07:21:07.737984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.022 qpair failed and we were unable to recover it. 00:35:47.022 [2024-11-18 07:21:07.738130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.022 [2024-11-18 07:21:07.738164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.022 qpair failed and we were unable to recover it. 00:35:47.022 [2024-11-18 07:21:07.738313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.022 [2024-11-18 07:21:07.738346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.022 qpair failed and we were unable to recover it. 00:35:47.022 [2024-11-18 07:21:07.738474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.022 [2024-11-18 07:21:07.738521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.022 qpair failed and we were unable to recover it. 00:35:47.022 [2024-11-18 07:21:07.738663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.022 [2024-11-18 07:21:07.738696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.022 qpair failed and we were unable to recover it. 00:35:47.022 [2024-11-18 07:21:07.738812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.022 [2024-11-18 07:21:07.738844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.022 qpair failed and we were unable to recover it. 00:35:47.022 [2024-11-18 07:21:07.738953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.022 [2024-11-18 07:21:07.738985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.022 qpair failed and we were unable to recover it. 00:35:47.022 [2024-11-18 07:21:07.739091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.022 [2024-11-18 07:21:07.739125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.022 qpair failed and we were unable to recover it. 00:35:47.022 [2024-11-18 07:21:07.739225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.022 [2024-11-18 07:21:07.739257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.022 qpair failed and we were unable to recover it. 
00:35:47.022 [2024-11-18 07:21:07.739348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.022 [2024-11-18 07:21:07.739380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.739485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.739541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.739644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.739675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.739807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.739838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.739946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.739977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.740099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.740130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.740275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.740310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.740442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.740474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.740590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.740621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.740716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.740749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 
00:35:47.023 [2024-11-18 07:21:07.740887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.740919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.741047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.741079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.741241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.741272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.741419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.741451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.741557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.741590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.741723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.741754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.741890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.741922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.742033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.742064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.742173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.742206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.742374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.742410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 
00:35:47.023 [2024-11-18 07:21:07.742549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.742579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.742709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.742739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.742848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.742878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.742982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.743012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.743105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.743138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.743281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.743313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.743445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.743475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.743612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.743643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.743738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.743768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.743902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.743933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 
00:35:47.023 [2024-11-18 07:21:07.744066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.744097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.744258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.744287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.744395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.744425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.744546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.744576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.744707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.744736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.744859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.744889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.744985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.745014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.023 [2024-11-18 07:21:07.745167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.023 [2024-11-18 07:21:07.745197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.023 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.745289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.745319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.745448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.745478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 
00:35:47.024 [2024-11-18 07:21:07.745626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.745655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.745748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.745777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.745890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.745920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.746022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.746052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.746175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.746204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.746307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.746339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.746450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.746487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.746594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.746626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.746766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.746797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.746929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.746959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 
00:35:47.024 [2024-11-18 07:21:07.747056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.747087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.747178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.747210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.747320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.747350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.747466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.747530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.747652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.747685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.747821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.747851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.747994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.748024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.748160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.748192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.748291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.748321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.748482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.748521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 
00:35:47.024 [2024-11-18 07:21:07.748629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.748660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.748785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.748816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.748975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.749004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.749138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.749167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.749272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.749303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.749434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.749465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.749625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.749669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.749826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.749856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.749954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.749983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.750146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.750177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 
00:35:47.024 [2024-11-18 07:21:07.750301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.024 [2024-11-18 07:21:07.750330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.024 qpair failed and we were unable to recover it. 00:35:47.024 [2024-11-18 07:21:07.750430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.750460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.750564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.750594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.750722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.750758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.750893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.750922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.751028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.751056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.751178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.751207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.751332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.751375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.751517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.751563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.751689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.751717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 
00:35:47.025 [2024-11-18 07:21:07.751867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.751896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.752011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.752039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.752171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.752200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.752329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.752361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.752460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.752499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.752638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.752667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.752766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.752795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.752950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.752980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.753114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.753144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.753265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.753299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 
00:35:47.025 [2024-11-18 07:21:07.753435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.753465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.753609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.753639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.753741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.753768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.753890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.753919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.754022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.754049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.754150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.754181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.754322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.754352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.754463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.754505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.754595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.754624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.754749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.754778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 
00:35:47.025 [2024-11-18 07:21:07.754892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.754927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.755028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.755058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.755209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.755238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.755341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.755369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.755513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.755543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.755640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.755669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.755757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.755786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.755913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.755943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.025 [2024-11-18 07:21:07.756051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.025 [2024-11-18 07:21:07.756079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.025 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.756178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.756206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 
00:35:47.026 [2024-11-18 07:21:07.756313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.756347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.756472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.756514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.756616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.756644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.756738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.756767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.756873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.756901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.757003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.757031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.757130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.757162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.757258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.757289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.757386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.757415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.757546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.757578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 
00:35:47.026 [2024-11-18 07:21:07.757678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.757707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.757852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.757880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.757970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.758000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.758124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.758164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.758282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.758312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.758408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.758438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.758551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.758579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.758683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.758711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.758804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.758832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.758914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.758942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 
00:35:47.026 [2024-11-18 07:21:07.759826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.759871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.760003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.760031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.760165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.760202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.760301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.760329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.760449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.760476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.760616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.760647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.760742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.760770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.760889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.760918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.761058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.761086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.761233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.761260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 
00:35:47.026 [2024-11-18 07:21:07.761403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.761445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.761562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.761591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.761682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.761710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.761809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.761837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.761951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.761978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.762076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.026 [2024-11-18 07:21:07.762105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.026 qpair failed and we were unable to recover it. 00:35:47.026 [2024-11-18 07:21:07.762203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.762231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.762355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.762383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.762482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.762536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.762660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.762686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 
00:35:47.027 [2024-11-18 07:21:07.762764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.762790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.762934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.762968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.763090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.763121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.763225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.763251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.763376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.763408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.763529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.763557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.763642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.763669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.763749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.763776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.763886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.763920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.764037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.764066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 
00:35:47.027 [2024-11-18 07:21:07.764193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.764219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.764318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.764345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.764460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.764501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.764606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.764632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.764725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.764751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.764882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.764908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.764989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.765015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.765102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.765130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.765273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.765313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.765403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.765431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 
00:35:47.027 [2024-11-18 07:21:07.765542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.765569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.765655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.765682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.765776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.765807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.765885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.765911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.766031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.766059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.766140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.766167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.766286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.766313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.766426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.766452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.766551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.766578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.766666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.766691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 
00:35:47.027 [2024-11-18 07:21:07.766781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.766809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.766923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.766967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.767084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.767113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.767242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.027 [2024-11-18 07:21:07.767268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.027 qpair failed and we were unable to recover it. 00:35:47.027 [2024-11-18 07:21:07.767357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.767384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.767483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.767515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.767597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.767623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.767715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.767741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.767830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.767857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.767939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.767966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 
00:35:47.028 [2024-11-18 07:21:07.768083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.768109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.768199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.768225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.768311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.768336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.768426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.768455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.768547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.768574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.768665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.768692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.768779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.768805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.768912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.768938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.769023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.769050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.769143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.769170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 
00:35:47.028 [2024-11-18 07:21:07.769264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.769290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.769371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.769397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.769474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.769505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.769589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.769614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.769703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.769731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.769830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.769856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.769943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.769969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.770074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.770100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.770242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.770270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.770351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.770378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 
00:35:47.028 [2024-11-18 07:21:07.770499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.770525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.770613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.770638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.770723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.770748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.770862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.770891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.771031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.771062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.771157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.771196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.771291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.771318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.771401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.771428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.771509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.771537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 00:35:47.028 [2024-11-18 07:21:07.771623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.771650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.028 qpair failed and we were unable to recover it. 
00:35:47.028 [2024-11-18 07:21:07.771765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.028 [2024-11-18 07:21:07.771791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.771878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.771910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.772029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.772055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.772143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.772170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.772252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.772279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.772360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.772387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.772481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.772517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.772597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.772623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.772712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.772738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.772833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.772859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 
00:35:47.029 [2024-11-18 07:21:07.772966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.772991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.773094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.773134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.773229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.773257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.773350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.773378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.773503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.773530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.773629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.773658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.773748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.773775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.773862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.773888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.773994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.774021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.774116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.774145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 
00:35:47.029 [2024-11-18 07:21:07.774259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.774286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.774407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.774434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.774555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.774582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.774671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.774697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.774796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.774823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.774910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.774935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.775048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.775074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.775184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.775210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.775298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.775326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.775447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.775476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 
00:35:47.029 [2024-11-18 07:21:07.775589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.775617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.775711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.775737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.775828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.775861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.775974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.776000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.029 [2024-11-18 07:21:07.776115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.029 [2024-11-18 07:21:07.776141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.029 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.776234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.776262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.776366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.776406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.776504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.776531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.776613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.776638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.776720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.776746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 
00:35:47.030 [2024-11-18 07:21:07.776836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.776862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.776972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.776999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.777096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.777125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.777237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.777264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.777380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.777406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.777498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.777524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.777609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.777637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.777756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.777783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.777901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.777937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.778016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.778042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 
00:35:47.030 [2024-11-18 07:21:07.778166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.778192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.778300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.778326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.778412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.778438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.778534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.778560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.778643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.778669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.778775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.778823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.778919] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:35:47.030 [2024-11-18 07:21:07.778974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.778996] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:47.030 [2024-11-18 07:21:07.779002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.779104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.779130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.779218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.779245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 
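
Interleaved in the block above is the start banner of the next target instance ("Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...") together with its DPDK EAL parameters: -c 0xF0 is a hexadecimal core mask selecting cores 4-7, --file-prefix=spdk0 and --proc-type=auto keep this instance's shared-memory and hugepage files separate from other DPDK processes, --base-virtaddr pins the mapping base address, and --no-telemetry / --match-allocations are the options SPDK passes through to the EAL. As a small worked example of the core-mask arithmetic only (a hypothetical helper, not part of SPDK or DPDK):

/* Hypothetical helper, not part of SPDK/DPDK: decode a DPDK-style hex core
 * mask (such as the "-c 0xF0" in the EAL parameters above) into core IDs. */
#include <stdio.h>

static void print_cores(unsigned long mask)
{
    printf("coremask 0x%lx ->", mask);
    for (unsigned int core = 0; core < 8 * sizeof(mask); core++) {
        if (mask & (1UL << core)) {
            printf(" %u", core);
        }
    }
    printf("\n");
}

int main(void)
{
    print_cores(0xF0);   /* prints: coremask 0xf0 -> 4 5 6 7 */
    return 0;
}
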
00:35:47.030 [2024-11-18 07:21:07.779339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.779365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.779460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.779511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.779627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.779653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.779741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.779766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.779883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.779909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.780017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.780043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.780135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.780165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.780316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.780344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.780448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.780478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.780593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.780621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 
00:35:47.030 [2024-11-18 07:21:07.780706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.780732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.780812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.780839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.030 qpair failed and we were unable to recover it. 00:35:47.030 [2024-11-18 07:21:07.780942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.030 [2024-11-18 07:21:07.780968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.781046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.781072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.781167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.781193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.781269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.781295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.781409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.781435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.781553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.781580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.781667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.781693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.781798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.781828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 
00:35:47.031 [2024-11-18 07:21:07.781963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.781991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.782089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.782118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.782248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.782275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.782384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.782411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.782509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.782536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.782626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.782653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.782747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.782774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.782897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.782923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.783040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.783067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.783156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.783184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 
00:35:47.031 [2024-11-18 07:21:07.783262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.783288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.783404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.783431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.783529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.783558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.783652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.783678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.783767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.783793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.783894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.783921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.784007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.784035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.784142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.784169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.784279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.784319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.784409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.784437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 
00:35:47.031 [2024-11-18 07:21:07.784537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.784564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.784679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.784705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.784823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.784849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.784932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.784959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.785069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.785095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.785206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.785232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.785316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.785345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.785450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.785478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.785591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.785623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 00:35:47.031 [2024-11-18 07:21:07.785714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.031 [2024-11-18 07:21:07.785741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.031 qpair failed and we were unable to recover it. 
00:35:47.031 [2024-11-18 07:21:07.785897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.785924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.786045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.786072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.786185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.786212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.786338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.786384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.786478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.786516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.786603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.786631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.786743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.786769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.786901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.786928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.787035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.787061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.787177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.787205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 
00:35:47.032 [2024-11-18 07:21:07.787296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.787322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.787417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.787445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.787554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.787581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.787667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.787693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.787782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.787809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.787894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.787921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.788061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.788087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.788196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.788222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.788346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.788371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.788470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.788527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 
00:35:47.032 [2024-11-18 07:21:07.788619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.788648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.788764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.788791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.788878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.788905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.788990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.789017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.789103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.789130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.789240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.789272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.789364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.789403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.789519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.789548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.789664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.789691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.789774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.789800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 
00:35:47.032 [2024-11-18 07:21:07.789882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.789909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.789999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.790030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.790135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.790160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.790267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.790300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.790395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.790422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.790541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.790568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.790657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.790683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.032 qpair failed and we were unable to recover it. 00:35:47.032 [2024-11-18 07:21:07.790772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.032 [2024-11-18 07:21:07.790797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.790907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.790933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.791026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.791052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 
00:35:47.033 [2024-11-18 07:21:07.791143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.791171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.791285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.791312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.791391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.791420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.791541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.791569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.791666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.791693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.791770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.791797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.791941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.791967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.792047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.792073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.792225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.792253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.792337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.792364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 
00:35:47.033 [2024-11-18 07:21:07.792456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.792484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.792575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.792602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.792693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.792721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.792830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.792857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.792969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.792994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.793116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.793142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.793228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.793255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.793371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.793398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.793484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.793519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.793606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.793632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 
00:35:47.033 [2024-11-18 07:21:07.793715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.793741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.793863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.793889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.794010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.794036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.794153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.794181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.794280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.794308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.794425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.794457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.794558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.794585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.794667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.794693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.794775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.794804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.033 [2024-11-18 07:21:07.794925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.794951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 
00:35:47.033 [2024-11-18 07:21:07.795029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.033 [2024-11-18 07:21:07.795056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.033 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.795147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.795175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.795270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.795298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.795404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.795430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.795550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.795577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.795670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.795697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.795783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.795809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.795895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.795922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.796038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.796074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.796163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.796191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 
00:35:47.034 [2024-11-18 07:21:07.796263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.796289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.796370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.796397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.796487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.796540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.796624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.796649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.796733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.796758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.796850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.796875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.797002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.797028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.797142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.797169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.797298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.797327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.797439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.797467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 
00:35:47.034 [2024-11-18 07:21:07.797574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.797601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.797713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.797740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.797826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.797858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.797949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.797976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.798119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.798146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.798253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.798279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.798392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.798418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.798527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.798553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.798637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.798663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.798750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.798777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 
00:35:47.034 [2024-11-18 07:21:07.798859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.798885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.798963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.798989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.799085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.799112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.799230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.799256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.799347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.799374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.799487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.799521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.799607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.799633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.799717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.034 [2024-11-18 07:21:07.799743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.034 qpair failed and we were unable to recover it. 00:35:47.034 [2024-11-18 07:21:07.799859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.799886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.799975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.800001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 
00:35:47.035 [2024-11-18 07:21:07.800116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.800143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.800238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.800267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.800357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.800386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.800483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.800530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.800625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.800653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.800748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.800775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.800870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.800896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.800985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.801012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.801107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.801135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.801240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.801279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 
00:35:47.035 [2024-11-18 07:21:07.801371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.801398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.801483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.801516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.801631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.801657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.801750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.801776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.801873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.801900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.802016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.802042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.802138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.802167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.802249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.802277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.802378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.802408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.802531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.802558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 
00:35:47.035 [2024-11-18 07:21:07.802648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.802675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.802802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.802828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.802922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.802954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.803076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.803102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.803195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.803223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.803315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.803343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.803454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.803481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.803569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.803597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.803679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.803706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.803834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.803860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 
00:35:47.035 [2024-11-18 07:21:07.803948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.803975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.804061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.804090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.804196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.804235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.804333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.804361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.804475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.804515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.035 [2024-11-18 07:21:07.804598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.035 [2024-11-18 07:21:07.804625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.035 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.804715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.804744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.804862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.804889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.804998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.805025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.805114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.805140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 
00:35:47.036 [2024-11-18 07:21:07.805232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.805260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.805396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.805436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.805571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.805600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.805716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.805742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.805891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.805916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.806007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.806034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.806119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.806145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.806295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.806321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.806432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.806458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.806550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.806580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 
00:35:47.036 [2024-11-18 07:21:07.806681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.806720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.806851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.806878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.806993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.807019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.807102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.807130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.807217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.807244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.807385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.807411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.807526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.807553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.807650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.807688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.807786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.807814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.807928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.807956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 
00:35:47.036 [2024-11-18 07:21:07.808064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.808091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.808180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.808209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.808320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.808345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.808466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.808499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.808592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.808619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.808733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.808760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.808883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.808910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.808997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.809024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.809135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.809162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.809260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.809299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 
00:35:47.036 [2024-11-18 07:21:07.809391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.809418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.809575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.809604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.809720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.036 [2024-11-18 07:21:07.809747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.036 qpair failed and we were unable to recover it. 00:35:47.036 [2024-11-18 07:21:07.809887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.809913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.810017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.810043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.810130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.810156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.810249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.810278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.810361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.810389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.810497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.810525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.810609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.810636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 
00:35:47.037 [2024-11-18 07:21:07.810743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.810769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.810854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.810882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.810968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.810995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.811080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.811118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.811209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.811235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.811321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.811348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.811434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.811462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.811571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.811610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.811708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.811737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.811852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.811884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 
00:35:47.037 [2024-11-18 07:21:07.811964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.811991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.812076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.812102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.812207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.812247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.812339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.812367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.812480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.812521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.812609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.812635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.812718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.812744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.812824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.812850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.812935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.812961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.813035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.813060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 
00:35:47.037 [2024-11-18 07:21:07.813205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.813234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.813327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.813355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.813465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.813515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.813637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.813663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.813801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.813826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.813912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.813937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.814055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.814083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.814180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.814206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.814297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.814323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.037 [2024-11-18 07:21:07.814413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.814439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 
00:35:47.037 [2024-11-18 07:21:07.814535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.037 [2024-11-18 07:21:07.814562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.037 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.814672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.814697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.814783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.814810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.814902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.814931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.815055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.815081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.815167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.815193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.815311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.815350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.815503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.815532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.815628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.815655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.815774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.815809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 
00:35:47.038 [2024-11-18 07:21:07.815932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.815958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.816042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.816067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.816183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.816208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.816327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.816353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.816474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.816520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.816638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.816664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.816779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.816814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.816897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.816924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.817035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.817062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.817154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.817180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 
00:35:47.038 [2024-11-18 07:21:07.817273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.817300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.817414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.817443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.817537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.817564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.817656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.817682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.817797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.817824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.817907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.817934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.818038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.818065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.818184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.818213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.818355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.818381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.818500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.818528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 
00:35:47.038 [2024-11-18 07:21:07.818612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.818639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.818725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.818752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.818866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.818893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.818995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.038 [2024-11-18 07:21:07.819023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.038 qpair failed and we were unable to recover it. 00:35:47.038 [2024-11-18 07:21:07.819117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.819156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.819270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.819298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.819381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.819407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.819505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.819534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.819620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.819649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.819791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.819818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 
00:35:47.039 [2024-11-18 07:21:07.819935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.819971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.820120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.820147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.820238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.820264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.820380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.820407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.820510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.820539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.820627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.820654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.820772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.820812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.820924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.820950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.821070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.821097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.821177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.821202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 
00:35:47.039 [2024-11-18 07:21:07.821322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.821350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.821497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.821536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.821636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.821664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.821758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.821797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.821888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.821914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.822001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.822029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.822139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.822166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.822252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.822279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.822376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.822415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.822529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.822557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 
00:35:47.039 [2024-11-18 07:21:07.822659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.822685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.822774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.822800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.822919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.822945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.823040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.823066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.823138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.823164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.823314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.823340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.823451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.823480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.823581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.823609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.823727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.823754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.823837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.823863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 
00:35:47.039 [2024-11-18 07:21:07.823985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.824012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.039 [2024-11-18 07:21:07.824103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.039 [2024-11-18 07:21:07.824130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.039 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.824215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.824242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.824368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.824396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.824511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.824550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.824651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.824679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.824772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.824798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.824879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.824904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.824990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.825016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.825093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.825122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 
00:35:47.040 [2024-11-18 07:21:07.825240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.825267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.825394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.825421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.825526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.825555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.825667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.825693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.825775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.825803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.825898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.825925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.826012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.826044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.826143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.826170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.826261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.826288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.826400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.826426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 
00:35:47.040 [2024-11-18 07:21:07.826523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.826550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.826668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.826694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.826788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.826814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.826900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.826926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.827010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.827039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.827160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.827187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.827278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.827317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.827441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.827469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.827598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.827627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.827743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.827774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 
00:35:47.040 [2024-11-18 07:21:07.827868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.827895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.827981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.828007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.828123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.828149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.828228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.828255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.828350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.828378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.828505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.828534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.828677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.828704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.828783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.828809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.828924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.040 [2024-11-18 07:21:07.828959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.040 qpair failed and we were unable to recover it. 00:35:47.040 [2024-11-18 07:21:07.829035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.829063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 
00:35:47.041 [2024-11-18 07:21:07.829144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.829171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.829283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.829309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.829424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.829451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.829540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.829572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.829688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.829715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.829829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.829856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.829994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.830021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.830108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.830135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.830254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.830283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.830416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.830455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 
00:35:47.041 [2024-11-18 07:21:07.830585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.830613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.830702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.830728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.830828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.830855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.830946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.830971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.831060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.831087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.831230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.831257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.831385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.831424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.831534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.831563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.831648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.831675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.831792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.831818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 
00:35:47.041 [2024-11-18 07:21:07.831949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.831976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.832085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.832112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.832227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.832255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.832349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.832376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.832485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.832524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.832638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.832665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.832774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.832800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.832883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.832910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.832995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.833022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.833137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.833163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 
00:35:47.041 [2024-11-18 07:21:07.833262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.833291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.833408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.833435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.833547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.833574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.833666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.833692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.833776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.833802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.833916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.833942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.041 [2024-11-18 07:21:07.834055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.041 [2024-11-18 07:21:07.834083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.041 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.834225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.834252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.834331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.834357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.834473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.834509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 
00:35:47.042 [2024-11-18 07:21:07.834600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.834627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.834738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.834765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.834904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.834931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.835043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.835074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.835177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.835216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.835335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.835364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.835475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.835508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.835601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.835628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.835714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.835740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.835837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.835863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 
00:35:47.042 [2024-11-18 07:21:07.835945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.835971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.836087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.836113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.836212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.836250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.836379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.836407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.836500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.836530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.836642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.836668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.836747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.836774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.836897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.836924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.837008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.837034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.837151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.837180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 
00:35:47.042 [2024-11-18 07:21:07.837274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.837302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.837392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.837420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.837508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.837535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.837612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.837639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.837754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.837783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.837895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.837922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.838032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.838058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.838198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.838224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.838307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.838334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 00:35:47.042 [2024-11-18 07:21:07.838438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.042 [2024-11-18 07:21:07.838464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.042 qpair failed and we were unable to recover it. 
00:35:47.042 [2024-11-18 07:21:07.838565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.838593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.838708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.838736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.838854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.838880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.838990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.839016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.839125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.839152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.839238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.839264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.839385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.839413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.839524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.839563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.839665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.839692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.839771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.839803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 
00:35:47.043 [2024-11-18 07:21:07.839909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.839935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.840064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.840103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.840198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.840226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.840338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.840369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.840452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.840478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.840573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.840600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.840693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.840720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.840812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.840840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.840951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.840978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.841086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.841112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 
00:35:47.043 [2024-11-18 07:21:07.841228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.841254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.841355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.841394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.841517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.841545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.841645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.841673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.841768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.841796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.841903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.841930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.842041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.842067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.842154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.842181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.842306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.842335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.842422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.842449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 
00:35:47.043 [2024-11-18 07:21:07.842548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.842576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.842654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.842680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.842754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.842780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.842891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.842919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.843005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.843032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.843173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.843199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.843286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.843313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.043 qpair failed and we were unable to recover it. 00:35:47.043 [2024-11-18 07:21:07.843421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.043 [2024-11-18 07:21:07.843447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.843547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.843574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.843662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.843687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 
00:35:47.044 [2024-11-18 07:21:07.843776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.843809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.843929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.843956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.844059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.844085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.844196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.844222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.844316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.844355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.844454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.844481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.844606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.844633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.844722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.844747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.844866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.844892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.845003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.845029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 
00:35:47.044 [2024-11-18 07:21:07.845110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.845138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.845229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.845255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.845361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.845388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.845477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.845510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.845630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.845656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.845746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.845772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.845883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.845911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.846049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.846075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.846203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.846242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.846358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.846386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 
00:35:47.044 [2024-11-18 07:21:07.846514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.846553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.846676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.846704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.846790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.846817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.846904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.846930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.847046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.847073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.847151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.847177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.847310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.847350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.847445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.847473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.847570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.847597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 00:35:47.044 [2024-11-18 07:21:07.847715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.044 [2024-11-18 07:21:07.847741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.044 qpair failed and we were unable to recover it. 
00:35:47.044 [2024-11-18 07:21:07.847830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.044 [2024-11-18 07:21:07.847856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420
00:35:47.044 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed with errno = 111, sock connection error of the tqpair, "qpair failed and we were unable to recover it.") repeats continuously from 07:21:07.847 to 07:21:07.862 for tqpairs 0x7f772c000b90, 0x7f7734000b90, 0x7f7728000b90 and 0x2170b40, all targeting addr=10.0.0.2, port=4420 ...]
00:35:47.047 [2024-11-18 07:21:07.861996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[... the same connect()/qpair failure sequence continues from 07:21:07.862 to 07:21:07.875 against the same four tqpairs ...]
00:35:47.050 [2024-11-18 07:21:07.875400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.050 [2024-11-18 07:21:07.875428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420
00:35:47.050 qpair failed and we were unable to recover it.
00:35:47.050 [2024-11-18 07:21:07.875550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.050 [2024-11-18 07:21:07.875579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.050 qpair failed and we were unable to recover it. 00:35:47.050 [2024-11-18 07:21:07.875673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.050 [2024-11-18 07:21:07.875700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.050 qpair failed and we were unable to recover it. 00:35:47.050 [2024-11-18 07:21:07.875792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.050 [2024-11-18 07:21:07.875819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.050 qpair failed and we were unable to recover it. 00:35:47.050 [2024-11-18 07:21:07.875908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.050 [2024-11-18 07:21:07.875936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.050 qpair failed and we were unable to recover it. 00:35:47.050 [2024-11-18 07:21:07.876055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.050 [2024-11-18 07:21:07.876082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.050 qpair failed and we were unable to recover it. 00:35:47.050 [2024-11-18 07:21:07.876175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.050 [2024-11-18 07:21:07.876201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.050 qpair failed and we were unable to recover it. 00:35:47.050 [2024-11-18 07:21:07.876284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.050 [2024-11-18 07:21:07.876311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.050 qpair failed and we were unable to recover it. 00:35:47.050 [2024-11-18 07:21:07.876456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.050 [2024-11-18 07:21:07.876508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.050 qpair failed and we were unable to recover it. 00:35:47.050 [2024-11-18 07:21:07.876618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.050 [2024-11-18 07:21:07.876645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.050 qpair failed and we were unable to recover it. 00:35:47.050 [2024-11-18 07:21:07.876739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.050 [2024-11-18 07:21:07.876766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.050 qpair failed and we were unable to recover it. 
00:35:47.050 [2024-11-18 07:21:07.876848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.050 [2024-11-18 07:21:07.876878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.050 qpair failed and we were unable to recover it. 00:35:47.050 [2024-11-18 07:21:07.876987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.050 [2024-11-18 07:21:07.877014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.050 qpair failed and we were unable to recover it. 00:35:47.050 [2024-11-18 07:21:07.877101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.050 [2024-11-18 07:21:07.877129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.050 qpair failed and we were unable to recover it. 00:35:47.050 [2024-11-18 07:21:07.877230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.050 [2024-11-18 07:21:07.877270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.050 qpair failed and we were unable to recover it. 00:35:47.050 [2024-11-18 07:21:07.877405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.050 [2024-11-18 07:21:07.877445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.050 qpair failed and we were unable to recover it. 00:35:47.050 [2024-11-18 07:21:07.877554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.877583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.877671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.877697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.877783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.877810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.877924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.877951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.878093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.878121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 
00:35:47.051 [2024-11-18 07:21:07.878209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.878236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.878335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.878375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.878513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.878541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.878638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.878664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.878762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.878793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.878890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.878918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.879066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.879093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.879210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.879237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.879331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.879358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.879503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.879530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 
00:35:47.051 [2024-11-18 07:21:07.879621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.879648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.879761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.879796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.879921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.879948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.880062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.880088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.880171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.880198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.880292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.880331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.880426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.880454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.880584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.880613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.880705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.880731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.880822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.880848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 
00:35:47.051 [2024-11-18 07:21:07.880938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.880965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.881116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.881145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.881232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.881259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.881348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.881374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.881518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.881546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.881640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.881667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.881750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.881778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.881901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.881927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.882013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.882040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.882132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.882160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 
00:35:47.051 [2024-11-18 07:21:07.882269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.882295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.882378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.882404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.882500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.882527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.882641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.882668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.882781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.882816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.051 qpair failed and we were unable to recover it. 00:35:47.051 [2024-11-18 07:21:07.882953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.051 [2024-11-18 07:21:07.882980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 00:35:47.052 [2024-11-18 07:21:07.883092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.052 [2024-11-18 07:21:07.883119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 00:35:47.052 [2024-11-18 07:21:07.883233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.052 [2024-11-18 07:21:07.883259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 00:35:47.052 [2024-11-18 07:21:07.883348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.052 [2024-11-18 07:21:07.883376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 00:35:47.052 [2024-11-18 07:21:07.883482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.052 [2024-11-18 07:21:07.883529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 
00:35:47.052 [2024-11-18 07:21:07.883627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.052 [2024-11-18 07:21:07.883655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 00:35:47.052 [2024-11-18 07:21:07.883767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.052 [2024-11-18 07:21:07.883794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 00:35:47.052 [2024-11-18 07:21:07.883922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.052 [2024-11-18 07:21:07.883948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 00:35:47.052 [2024-11-18 07:21:07.884043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.052 [2024-11-18 07:21:07.884069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 00:35:47.052 [2024-11-18 07:21:07.884158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.052 [2024-11-18 07:21:07.884187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 00:35:47.052 [2024-11-18 07:21:07.884299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.052 [2024-11-18 07:21:07.884326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 00:35:47.052 [2024-11-18 07:21:07.884448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.052 [2024-11-18 07:21:07.884475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 00:35:47.052 [2024-11-18 07:21:07.884578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.052 [2024-11-18 07:21:07.884605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 00:35:47.052 [2024-11-18 07:21:07.884696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.052 [2024-11-18 07:21:07.884723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 00:35:47.052 [2024-11-18 07:21:07.884821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.052 [2024-11-18 07:21:07.884848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 
00:35:47.052 [2024-11-18 07:21:07.884941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.052 [2024-11-18 07:21:07.884968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 00:35:47.052 [2024-11-18 07:21:07.885068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.052 [2024-11-18 07:21:07.885095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 00:35:47.052 [2024-11-18 07:21:07.885234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.052 [2024-11-18 07:21:07.885261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 00:35:47.052 [2024-11-18 07:21:07.885345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.052 [2024-11-18 07:21:07.885371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 00:35:47.052 [2024-11-18 07:21:07.885460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.052 [2024-11-18 07:21:07.885505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 00:35:47.052 [2024-11-18 07:21:07.885634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.052 [2024-11-18 07:21:07.885673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 00:35:47.052 [2024-11-18 07:21:07.885766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.052 [2024-11-18 07:21:07.885802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 00:35:47.052 [2024-11-18 07:21:07.885888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.052 [2024-11-18 07:21:07.885913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 00:35:47.052 [2024-11-18 07:21:07.886034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.052 [2024-11-18 07:21:07.886060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.052 qpair failed and we were unable to recover it. 00:35:47.052 [2024-11-18 07:21:07.886152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.886179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 
00:35:47.053 [2024-11-18 07:21:07.886294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.886319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.886407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.886442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.886557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.886587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.886696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.886723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.886841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.886867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.886951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.886978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.887055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.887082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.887164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.887190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.887306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.887332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.887426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.887454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 
00:35:47.053 [2024-11-18 07:21:07.887572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.887610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.887702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.887730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.887876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.887901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.888020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.888046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.888167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.888193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.888283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.888310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.888395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.888421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.888533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.888573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.888675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.888703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.888796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.888823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 
00:35:47.053 [2024-11-18 07:21:07.888904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.888930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.889020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.889047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.889162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.889188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.889270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.889297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.889410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.889439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.889562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.889589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.889700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.889726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.889821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.889847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.889984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.890024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.890165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.890193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 
00:35:47.053 [2024-11-18 07:21:07.890268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.890295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.890416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.890443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.890543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.890569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.890662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.890688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.890780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.890806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.890913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.890938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.891029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.891057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.891143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.891171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.891250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.053 [2024-11-18 07:21:07.891277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.053 qpair failed and we were unable to recover it. 00:35:47.053 [2024-11-18 07:21:07.891362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.891389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 
00:35:47.054 [2024-11-18 07:21:07.891500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.891527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.891614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.891645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.891732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.891759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.891879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.891906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.892012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.892038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.892120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.892146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.892279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.892319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.892445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.892473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.892578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.892605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.892687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.892713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 
00:35:47.054 [2024-11-18 07:21:07.892826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.892852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.892990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.893016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.893108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.893135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.893261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.893300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.893432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.893479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.893599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.893627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.893720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.893748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.893845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.893875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.893992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.894020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.894136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.894161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 
00:35:47.054 [2024-11-18 07:21:07.894275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.894301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.894381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.894408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.894501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.894528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.894641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.894667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.894751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.894777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.894891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.894917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.895035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.895063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.895152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.895179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.895294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.895327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.895410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.895436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 
00:35:47.054 [2024-11-18 07:21:07.895547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.895573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.895661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.895687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.895782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.895809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.895957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.895984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.896063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.896088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.896197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.896223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.896339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.896366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.896480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.896515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.896600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.896627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.896717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.896744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 
00:35:47.054 [2024-11-18 07:21:07.896835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.896862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.896986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.897013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.897147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.897185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.897284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.054 [2024-11-18 07:21:07.897311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.054 qpair failed and we were unable to recover it. 00:35:47.054 [2024-11-18 07:21:07.897413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.897452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.897547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.897575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.897667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.897694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.897800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.897826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.897933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.897959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.898109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.898139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 
00:35:47.055 [2024-11-18 07:21:07.898230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.898257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.898384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.898414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.898519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.898546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.898643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.898670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.898754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.898780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.898864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.898892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.898985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.899015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.899134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.899162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.899276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.899302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.899414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.899440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 
00:35:47.055 [2024-11-18 07:21:07.899557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.899584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.899673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.899699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.899800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.899826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.899910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.899936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.900019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.900045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.900131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.900158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.900274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.900300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.900419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.900447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.900547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.900578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.900690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.900716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 
00:35:47.055 [2024-11-18 07:21:07.900801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.900827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.900939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.900965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.901074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.901100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.901212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.901238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.901321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.901348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.901442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.901471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.901570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.901597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.901690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.901717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.901827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.901854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.901939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.901966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 
00:35:47.055 [2024-11-18 07:21:07.902046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.902074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.902165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.902192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.902318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.902346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.902441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.902469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.902565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.902591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.902683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.902709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.902813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.902848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.902973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.902999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.903087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.903112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 00:35:47.055 [2024-11-18 07:21:07.903207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.903235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.055 qpair failed and we were unable to recover it. 
00:35:47.055 [2024-11-18 07:21:07.903347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.055 [2024-11-18 07:21:07.903375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.903485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.903517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.903604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.903631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.903718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.903744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.903837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.903862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.903981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.904012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.904088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.904113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.904200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.904226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.904313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.904339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.904428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.904454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 
00:35:47.056 [2024-11-18 07:21:07.904558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.904596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.904742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.904770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.904865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.904892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.905009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.905035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.905181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.905207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.905355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.905382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.905470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.905512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.905594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.905620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.905702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.905731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.905864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.905890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 
00:35:47.056 [2024-11-18 07:21:07.905977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.906005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.906086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.906112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.906229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.906255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.906345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.906372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.906484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.906518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.906633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.906660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.906739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.906765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.906873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.906899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.906984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.907010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.907121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.907147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 
00:35:47.056 [2024-11-18 07:21:07.907261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.907288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.907377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.907403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.907525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.907554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.907670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.907695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.907787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.907813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.907928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.907954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.908061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.908086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.908172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.908199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.908313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.908339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 00:35:47.056 [2024-11-18 07:21:07.908428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.056 [2024-11-18 07:21:07.908453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.056 qpair failed and we were unable to recover it. 
00:35:47.057 [2024-11-18 07:21:07.908562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.908589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.908692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.908719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.908805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.908830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.908915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.908943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.909062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.909090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.909207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.909238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.909358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.909384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.909462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.909488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.909588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.909614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.909701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.909728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 
00:35:47.057 [2024-11-18 07:21:07.909852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.909878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.909969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.909995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.910086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.910113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.910193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.910220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.910367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.910406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.910514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.910543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.910639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.910666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.910758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.910785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.910896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.910924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.911023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.911050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 
00:35:47.057 [2024-11-18 07:21:07.911130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.911158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.911274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.911302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.911405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.911433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.911561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.911588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.911678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.911705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.911856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.911881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.911963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.911989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.912091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.912117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.912201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.912228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.912326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.912353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 
00:35:47.057 [2024-11-18 07:21:07.912439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.912467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.912566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.912593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.912671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.912702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.912796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.912822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.912914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.912940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.913013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.913039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.913114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.913140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.913222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.913249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.913348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.913376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.913467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.913513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 
00:35:47.057 [2024-11-18 07:21:07.913624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.913651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.913703] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:47.057 [2024-11-18 07:21:07.913737] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:47.057 [2024-11-18 07:21:07.913752] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:47.057 [2024-11-18 07:21:07.913765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:47.057 [2024-11-18 07:21:07.913763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.913776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:47.057 [2024-11-18 07:21:07.913789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.913901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.913926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.914040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.914067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.057 qpair failed and we were unable to recover it. 00:35:47.057 [2024-11-18 07:21:07.914162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.057 [2024-11-18 07:21:07.914189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.914281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.914310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.914433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.914460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.914565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.914591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 
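The app_setup_trace notices interleaved above describe how the nvmf target's trace data could be pulled while it is running. A minimal sketch of that capture, assuming a shell on the same test host with the spdk_trace tool from this build on PATH; the destination filename below is illustrative, not taken from the log:

    # snapshot the running nvmf app's tracepoints, shm instance 0 (command quoted from the notice above)
    spdk_trace -s nvmf -i 0
    # or keep the raw trace file for offline analysis/debug once the app has exited
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.saved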
00:35:47.058 [2024-11-18 07:21:07.914678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.914703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.914780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.914806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.914892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.914918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.915021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.915047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.915141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.915166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.915281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.915308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.915454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.915400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:47.058 [2024-11-18 07:21:07.915496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.915434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:47.058 [2024-11-18 07:21:07.915466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:47.058 [2024-11-18 07:21:07.915586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.915469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:47.058 [2024-11-18 07:21:07.915613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.915729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.915754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 
00:35:47.058 [2024-11-18 07:21:07.915847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.915873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.915969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.915995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.916083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.916109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.916194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.916222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.916310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.916336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.916428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.916456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.916567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.916594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.916703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.916729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.916814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.916839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.916939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.916965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 
00:35:47.058 [2024-11-18 07:21:07.917048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.917074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.917157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.917182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.917284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.917311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.917419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.917458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.917568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.917596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.917677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.917704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.917791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.917818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.917896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.917922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.918033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.918059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.918140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.918168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 
00:35:47.058 [2024-11-18 07:21:07.918255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.918281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.918366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.918393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.918482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.918515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.918598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.918624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.918702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.918728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.918824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.918854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.918955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.918986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.919108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.919135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.919244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.919272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.919356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.919383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 
00:35:47.058 [2024-11-18 07:21:07.919470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.919521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.919604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.919631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.058 [2024-11-18 07:21:07.919716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.058 [2024-11-18 07:21:07.919742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.058 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.919826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.919854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.919968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.919995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.920087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.920113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.920205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.920233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.920343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.920370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.920463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.920499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.920601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.920628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 
00:35:47.059 [2024-11-18 07:21:07.920714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.920740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.920832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.920858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.920941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.920968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.921058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.921086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.921174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.921200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.921289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.921315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.921392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.921419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.921541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.921580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.921674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.921703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.921787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.921814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 
00:35:47.059 [2024-11-18 07:21:07.921902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.921928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.922037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.922064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.922157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.922190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.922282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.922317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.922435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.922462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.922557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.922583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.922667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.922694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.922771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.922800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.922892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.922920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.922998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.923025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 
00:35:47.059 [2024-11-18 07:21:07.923113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.923140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.923231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.923259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.923377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.923404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.923501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.923528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.923621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.923648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.923730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.923757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.923843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.923870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.923957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.923985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.924071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.924098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.924178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.924206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 
00:35:47.059 [2024-11-18 07:21:07.924287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.924314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.924399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.924425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.924518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.924546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.924640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.924669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.924748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.924774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.924865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.924891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.924975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.925001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.925096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.925122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.925223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.059 [2024-11-18 07:21:07.925263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.059 qpair failed and we were unable to recover it. 00:35:47.059 [2024-11-18 07:21:07.925355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.925383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 
00:35:47.060 [2024-11-18 07:21:07.925469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.925505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.925595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.925621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.925704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.925730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.925827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.925857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.925939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.925965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.926054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.926083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.926173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.926200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.926292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.926319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.926413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.926440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.926569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.926596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 
00:35:47.060 [2024-11-18 07:21:07.926681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.926708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.926824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.926861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.926952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.926991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.927103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.927129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.927207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.927232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.927312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.927337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.927454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.927502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.927587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.927614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.927692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.927719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.927808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.927836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 
00:35:47.060 [2024-11-18 07:21:07.927920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.927946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.928052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.928092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.928181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.928208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.928283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.928309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.928400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.928428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.928532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.928559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.928662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.928689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.928764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.928789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.928869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.928894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.928970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.928995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 
00:35:47.060 [2024-11-18 07:21:07.929073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.929099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.929217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.929242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.929321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.929347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.929462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.929497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.929582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.929609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.929725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.929751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.929844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.929871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.929962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.929989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.930082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.930108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 00:35:47.060 [2024-11-18 07:21:07.930230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.060 [2024-11-18 07:21:07.930258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.060 qpair failed and we were unable to recover it. 
00:35:47.061 [2024-11-18 07:21:07.930350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.930389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.930482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.930524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.930610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.930636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.930719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.930745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.930850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.930876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.930973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.931004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.931092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.931117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.931202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.931228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.931320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.931348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.931425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.931452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 
00:35:47.061 [2024-11-18 07:21:07.931554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.931582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.931668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.931695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.931800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.931827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.931908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.931935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.932039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.932066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.932196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.932224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.932320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.932360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.932458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.932486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.932577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.932603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.932719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.932746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 
00:35:47.061 [2024-11-18 07:21:07.932841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.932868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.932986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.933012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.933098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.933124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.933215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.933242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.933334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.933359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.933439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.933465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.933575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.933602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.933690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.933716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.933804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.933831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.933913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.933940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 
00:35:47.061 [2024-11-18 07:21:07.934022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.934048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.934148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.934174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.934260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.934285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.934391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.934416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.934516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.934545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.934629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.934655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.934755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.934781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.934869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.934899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.935004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.935044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.935133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.935167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 
00:35:47.061 [2024-11-18 07:21:07.935265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.935292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.935407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.935434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.935542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.935569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.935648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.935674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.935757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.935792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.935872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.935898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.061 [2024-11-18 07:21:07.935999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.061 [2024-11-18 07:21:07.936027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.061 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.936150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.936177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.936262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.936288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.936371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.936402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 
00:35:47.062 [2024-11-18 07:21:07.936507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.936535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.936616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.936643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.936722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.936749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.936855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.936882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.936958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.936985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.937066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.937094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.937181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.937209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.937317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.937356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.937451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.937478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.937600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.937626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 
00:35:47.062 [2024-11-18 07:21:07.937711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.937737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.937834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.937860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.937947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.937977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.938093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.938119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.938198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.938234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.938320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.938348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.938442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.938470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.938569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.938596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.938684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.938711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.938816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.938843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 
00:35:47.062 [2024-11-18 07:21:07.938959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.938996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.939112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.939138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.939217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.939246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.939329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.939357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.939438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.939466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.939622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.939649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.939730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.939757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.939870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.939897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.939979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.940006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.940086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.940124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 
00:35:47.062 [2024-11-18 07:21:07.940233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.940260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.940346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.940372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.940459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.940497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.940577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.940604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.940689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.940715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.940840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.940867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.940976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.941014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.941111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.941139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.941221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.941248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.941335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.941361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 
00:35:47.062 [2024-11-18 07:21:07.941488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.941523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.941607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.941634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.941716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.062 [2024-11-18 07:21:07.941742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.062 qpair failed and we were unable to recover it. 00:35:47.062 [2024-11-18 07:21:07.941841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.941875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.941955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.941983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.942060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.942097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.942207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.942233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.942322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.942347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.942431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.942458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.942548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.942575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 
00:35:47.063 [2024-11-18 07:21:07.942668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.942697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.942778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.942805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.942919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.942945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.943028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.943054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.943166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.943206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.943301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.943329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.943409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.943444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.943546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.943572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.943683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.943710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.943793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.943819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 
00:35:47.063 [2024-11-18 07:21:07.943916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.943942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.944025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.944052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.944133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.944161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.944241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.944268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.944361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.944388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.944465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.944500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.944585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.944612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.944714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.944754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.944841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.944870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.944991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.945017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 
00:35:47.063 [2024-11-18 07:21:07.945104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.945131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.945215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.945243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.945370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.945397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.945484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.945519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.945612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.945638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.945712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.945738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.945831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.945859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.945949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.945976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.946077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.946103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.946221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.946247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 
00:35:47.063 [2024-11-18 07:21:07.946333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.946360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.946443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.946471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.946572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.946601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.063 [2024-11-18 07:21:07.946687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.063 [2024-11-18 07:21:07.946715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.063 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.946813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.946840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.946941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.946967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.947045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.947073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.947169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.947195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.947315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.947341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.947427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.947454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 
00:35:47.064 [2024-11-18 07:21:07.947549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.947577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.947663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.947690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.947775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.947805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.947928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.947963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.948078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.948104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.948199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.948226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.948309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.948341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.948420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.948447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.948557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.948584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.948661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.948688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 
00:35:47.064 [2024-11-18 07:21:07.948766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.948792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.948908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.948935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.949016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.949043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.949155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.949181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.949259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.949286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.949394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.949421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.949511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.949538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.949615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.949642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.949723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.949750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.949854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.949881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 
00:35:47.064 [2024-11-18 07:21:07.949961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.949996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.950085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.950112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.950187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.950213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.950312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.950362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.950443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.950471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.950580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.950607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.950691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.950717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.950815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.950841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.950924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.950950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.951026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.951051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 
00:35:47.064 [2024-11-18 07:21:07.951142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.951167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.951257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.951283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.951360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.951385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.951498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.951539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.951630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.951657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.951733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.951760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.951890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.951917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.952008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.952037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.952128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.952157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.064 [2024-11-18 07:21:07.952253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.952281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 
00:35:47.064 [2024-11-18 07:21:07.952376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.064 [2024-11-18 07:21:07.952402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.064 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.952502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.952529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.952616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.952642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.952758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.952791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.952878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.952904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.952998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.953024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.953116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.953146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.953226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.953252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.953339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.953365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.953468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.953499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 
00:35:47.065 [2024-11-18 07:21:07.953585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.953611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.953697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.953723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.953846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.953871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.953978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.954004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.954083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.954115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.954239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.954268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.954346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.954373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.954457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.954504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.954595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.954621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.954704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.954731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 
00:35:47.065 [2024-11-18 07:21:07.954831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.954870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.954955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.954981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.955103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.955136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.955226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.955252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.955332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.955357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.955443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.955469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.955569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.955597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.955713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.955739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.955835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.955861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.955937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.955962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 
00:35:47.065 [2024-11-18 07:21:07.956051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.956077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.956214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.956252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.956348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.956376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.956464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.956516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.956608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.956634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.956732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.956759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.956875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.956901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.956985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.957011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.957088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.957114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.065 [2024-11-18 07:21:07.957194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.957221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 
00:35:47.065 [2024-11-18 07:21:07.957316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.065 [2024-11-18 07:21:07.957343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.065 qpair failed and we were unable to recover it. 00:35:47.344 [2024-11-18 07:21:07.957451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.344 [2024-11-18 07:21:07.957487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.344 qpair failed and we were unable to recover it. 00:35:47.344 [2024-11-18 07:21:07.957588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.344 [2024-11-18 07:21:07.957614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.344 qpair failed and we were unable to recover it. 00:35:47.344 [2024-11-18 07:21:07.957702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.344 [2024-11-18 07:21:07.957728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.344 qpair failed and we were unable to recover it. 00:35:47.344 [2024-11-18 07:21:07.957831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.344 [2024-11-18 07:21:07.957857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.344 qpair failed and we were unable to recover it. 00:35:47.344 [2024-11-18 07:21:07.957944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.344 [2024-11-18 07:21:07.957971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.344 qpair failed and we were unable to recover it. 00:35:47.344 [2024-11-18 07:21:07.958071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.344 [2024-11-18 07:21:07.958099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.344 qpair failed and we were unable to recover it. 00:35:47.344 [2024-11-18 07:21:07.958202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.344 [2024-11-18 07:21:07.958232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.344 qpair failed and we were unable to recover it. 00:35:47.344 [2024-11-18 07:21:07.958327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.344 [2024-11-18 07:21:07.958353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.344 qpair failed and we were unable to recover it. 00:35:47.344 [2024-11-18 07:21:07.958440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.344 [2024-11-18 07:21:07.958467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.344 qpair failed and we were unable to recover it. 
00:35:47.344 [2024-11-18 07:21:07.958577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.344 [2024-11-18 07:21:07.958604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.344 qpair failed and we were unable to recover it. 00:35:47.344 [2024-11-18 07:21:07.958687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.344 [2024-11-18 07:21:07.958714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.344 qpair failed and we were unable to recover it. 00:35:47.344 [2024-11-18 07:21:07.958854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.344 [2024-11-18 07:21:07.958881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.344 qpair failed and we were unable to recover it. 00:35:47.344 [2024-11-18 07:21:07.958970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.344 [2024-11-18 07:21:07.959000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.344 qpair failed and we were unable to recover it. 00:35:47.344 [2024-11-18 07:21:07.959094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.344 [2024-11-18 07:21:07.959125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.344 qpair failed and we were unable to recover it. 00:35:47.344 [2024-11-18 07:21:07.959218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.344 [2024-11-18 07:21:07.959246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.344 qpair failed and we were unable to recover it. 00:35:47.344 [2024-11-18 07:21:07.959332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.344 [2024-11-18 07:21:07.959360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.344 qpair failed and we were unable to recover it. 00:35:47.344 [2024-11-18 07:21:07.959444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.344 [2024-11-18 07:21:07.959470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.344 qpair failed and we were unable to recover it. 00:35:47.344 [2024-11-18 07:21:07.959573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.344 [2024-11-18 07:21:07.959600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.344 qpair failed and we were unable to recover it. 00:35:47.344 [2024-11-18 07:21:07.959686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.344 [2024-11-18 07:21:07.959713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.344 qpair failed and we were unable to recover it. 
00:35:47.344 [2024-11-18 07:21:07.959831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.344 [2024-11-18 07:21:07.959873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.959963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.959989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.960080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.960107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.960201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.960229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.960317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.960344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.960428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.960454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.960602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.960628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.960714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.960740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.960836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.960863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.961013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.961039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 
00:35:47.345 [2024-11-18 07:21:07.961122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.961147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.961230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.961256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.961335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.961361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.961446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.961472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.961574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.961600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.961709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.961735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.961837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.961864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.961958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.961984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.962077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.962110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.962194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.962220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 
00:35:47.345 [2024-11-18 07:21:07.962313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.962340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.962450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.962476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.962569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.962602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.962693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.962728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.962828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.962854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.962934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.962962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.963044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.963070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.963199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.963227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.963323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.963350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.963439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.963465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 
00:35:47.345 [2024-11-18 07:21:07.963574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.963607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.963698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.963724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.963826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.963852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.345 [2024-11-18 07:21:07.963948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.345 [2024-11-18 07:21:07.963977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.345 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.964065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.964092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.964170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.964196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.964285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.964312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.964389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.964415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.964515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.964541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.964623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.964652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 
00:35:47.346 [2024-11-18 07:21:07.964729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.964761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.964873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.964913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.965009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.965037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.965139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.965166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.965274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.965301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.965388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.965415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.965520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.965548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.965633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.965659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.965745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.965774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.965873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.965901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 
00:35:47.346 [2024-11-18 07:21:07.965991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.966019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.966114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.966141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.966270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.966298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.966404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.966431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.966532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.966559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.966642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.966668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.966776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.966807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.966892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.966920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.967031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.967062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.967160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.967187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 
00:35:47.346 [2024-11-18 07:21:07.967277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.967304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.967413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.967439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.967538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.967566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.967656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.967683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.967769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.967799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.346 [2024-11-18 07:21:07.967891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.346 [2024-11-18 07:21:07.967918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.346 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.968033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.968060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.968159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.968186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.968262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.968288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.968380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.968407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 
00:35:47.347 [2024-11-18 07:21:07.968506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.968533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.968625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.968651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.968767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.968794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.968884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.968914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.968990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.969016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.969102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.969129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.969213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.969239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.969360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.969387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.969462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.969498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.969583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.969611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 
00:35:47.347 [2024-11-18 07:21:07.969697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.969732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.969835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.969862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.969950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.969977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.970059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.970085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.970201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.970227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.970338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.970378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.970468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.970505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.970595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.970622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.970711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.970737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.970824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.970852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 
00:35:47.347 [2024-11-18 07:21:07.970946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.970973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.971052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.971080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.971173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.971203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.971301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.971340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.971443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.971472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.971562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.347 [2024-11-18 07:21:07.971590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.347 qpair failed and we were unable to recover it. 00:35:47.347 [2024-11-18 07:21:07.971671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.971698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.971821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.971848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.971942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.971968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.972079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.972106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 
00:35:47.348 [2024-11-18 07:21:07.972192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.972219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.972300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.972326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.972410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.972436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.972525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.972552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.972633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.972659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.972736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.972762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.972845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.972872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.972964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.972993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.973084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.973112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.973253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.973293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 
00:35:47.348 [2024-11-18 07:21:07.973410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.973437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.973534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.973562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.973642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.973668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.973757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.973783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.973876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.973903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.973987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.974014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.974088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.974114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.974229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.974255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.974349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.974389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.974495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.974524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 
00:35:47.348 [2024-11-18 07:21:07.974644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.974674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.974794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.974821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.974912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.974939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.975018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.975048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.975185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.975213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.975293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.975321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.975404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.975430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.975526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.975553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.348 [2024-11-18 07:21:07.975638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.348 [2024-11-18 07:21:07.975665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.348 qpair failed and we were unable to recover it. 00:35:47.349 [2024-11-18 07:21:07.975770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.975800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 
00:35:47.349 [2024-11-18 07:21:07.975891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.975919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 00:35:47.349 [2024-11-18 07:21:07.976033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.976059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 00:35:47.349 [2024-11-18 07:21:07.976138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.976164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 00:35:47.349 [2024-11-18 07:21:07.976245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.976271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 00:35:47.349 [2024-11-18 07:21:07.976362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.976388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 00:35:47.349 [2024-11-18 07:21:07.976509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.976536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 00:35:47.349 [2024-11-18 07:21:07.976614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.976639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 00:35:47.349 [2024-11-18 07:21:07.976717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.976743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 00:35:47.349 [2024-11-18 07:21:07.976834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.976860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 00:35:47.349 [2024-11-18 07:21:07.976973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.977001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 
00:35:47.349 [2024-11-18 07:21:07.977103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.977141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 00:35:47.349 [2024-11-18 07:21:07.977230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.977259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 00:35:47.349 [2024-11-18 07:21:07.977343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.977370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 00:35:47.349 [2024-11-18 07:21:07.977483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.977518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 00:35:47.349 [2024-11-18 07:21:07.977607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.977633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 00:35:47.349 [2024-11-18 07:21:07.977738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.977765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 00:35:47.349 [2024-11-18 07:21:07.977876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.977903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 00:35:47.349 [2024-11-18 07:21:07.977989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.978016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 00:35:47.349 [2024-11-18 07:21:07.978098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.978125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 00:35:47.349 [2024-11-18 07:21:07.978238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.978266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 
00:35:47.349 [2024-11-18 07:21:07.978351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.978379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 00:35:47.349 [2024-11-18 07:21:07.978460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.978505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 00:35:47.349 [2024-11-18 07:21:07.978591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.349 [2024-11-18 07:21:07.978617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.349 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.978708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.978736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.978832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.978859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.978940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.978968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.979050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.979076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.979166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.979191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.979275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.979301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.979382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.979409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 
00:35:47.350 [2024-11-18 07:21:07.979501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.979528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.979619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.979646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.979755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.979782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.979872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.979898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.980005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.980031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.980156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.980196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.980289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.980318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.980402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.980430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.980619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.980647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.980735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.980761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 
00:35:47.350 [2024-11-18 07:21:07.980840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.980876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.980961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.980988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.981079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.981108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.981190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.981216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.981309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.981338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.981453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.981485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.981589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.981616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.981729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.981755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.981844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.981872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.981954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.981981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 
00:35:47.350 [2024-11-18 07:21:07.982072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.982098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.982239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.982265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.982356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.982383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.982468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.982512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.982600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.982627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.982736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.350 [2024-11-18 07:21:07.982762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.350 qpair failed and we were unable to recover it. 00:35:47.350 [2024-11-18 07:21:07.982863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.982890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.983001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.983034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.983130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.983169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.983267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.983306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 
00:35:47.351 [2024-11-18 07:21:07.983416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.983443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.983542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.983569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.983663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.983688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.983785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.983818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.983901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.983928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.984007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.984034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.984113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.984139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.984252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.984280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.984365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.984390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.984487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.984535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 
00:35:47.351 [2024-11-18 07:21:07.984623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.984648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.984739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.984765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.984853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.984890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.984987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.985013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.985095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.985121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.985201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.985229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.985307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.985340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.985424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.985450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.985552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.985579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.985661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.985688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 
00:35:47.351 [2024-11-18 07:21:07.985772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.985799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.985897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.985924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.986002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.986030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.986144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.986175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.986265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.986293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.986384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.986418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.986519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.986545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.351 qpair failed and we were unable to recover it. 00:35:47.351 [2024-11-18 07:21:07.986626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.351 [2024-11-18 07:21:07.986651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.986764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.986798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.986892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.986918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 
00:35:47.352 [2024-11-18 07:21:07.987002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.987028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.987143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.987169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.987256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.987285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.987372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.987401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.987482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.987521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.987629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.987655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.987736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.987762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.987859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.987886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.987986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.988012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.988096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.988122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 
00:35:47.352 [2024-11-18 07:21:07.988199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.988225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.988311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.988338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.988432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.988458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.988549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.988575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.988659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.988685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.988767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.988805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.988892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.988918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.989001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.989029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.989128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.989154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.989257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.989296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 
00:35:47.352 [2024-11-18 07:21:07.989424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.989452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.989562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.989589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.989667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.989693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.989771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.989797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.989871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.989897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.989988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.990015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.990105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.990133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.990219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.990251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.352 [2024-11-18 07:21:07.990334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.352 [2024-11-18 07:21:07.990360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.352 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.990442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.990469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 
00:35:47.353 [2024-11-18 07:21:07.990570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.990597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.990684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.990712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.990803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.990830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.990929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.990955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.991071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.991105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.991195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.991222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.991319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.991358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.991520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.991549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.991627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.991654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.991734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.991760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 
00:35:47.353 [2024-11-18 07:21:07.991851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.991878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.991994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.992021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.992115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.992143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.992236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.992265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.992354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.992381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.992461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.992500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.992591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.992618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.992699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.992725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.992869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.992897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.992987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.993014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 
00:35:47.353 [2024-11-18 07:21:07.993102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.993130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.993218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.993246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.993318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.993345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.993423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.993450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.993556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.993583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.993672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.993698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.993794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.993823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.993926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.993953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.994042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.994069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.353 [2024-11-18 07:21:07.994162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.994195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 
00:35:47.353 [2024-11-18 07:21:07.994282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.353 [2024-11-18 07:21:07.994310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.353 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.994403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.994449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.994552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.994581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.994685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.994712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.994809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.994835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.994909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.994935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.995030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.995070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.995163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.995191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.995276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.995304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.995428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.995455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 
00:35:47.354 [2024-11-18 07:21:07.995556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.995585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.995671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.995697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.995778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.995804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.995887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.995913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.996004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.996032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.996125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.996154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.996249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.996277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.996360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.996387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.996467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.996504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.996610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.996638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 
00:35:47.354 [2024-11-18 07:21:07.996754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.996790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.996866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.996893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.996979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.997005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.997102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.997130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.997226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.997254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.997365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.997391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.997477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.997516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.997599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.997626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.997724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.997751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.997846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.997873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 
00:35:47.354 [2024-11-18 07:21:07.997984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.998013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.998129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.998157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.354 qpair failed and we were unable to recover it. 00:35:47.354 [2024-11-18 07:21:07.998242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.354 [2024-11-18 07:21:07.998269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:07.998366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:07.998392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:07.998472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:07.998511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:07.998627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:07.998654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:07.998734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:07.998760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:07.998859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:07.998887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:07.998982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:07.999011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:07.999098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:07.999125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 
00:35:47.355 [2024-11-18 07:21:07.999209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:07.999236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:07.999326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:07.999359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:07.999436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:07.999462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:07.999573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:07.999600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:07.999682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:07.999710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:07.999829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:07.999856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:07.999951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:07.999978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:08.000060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:08.000086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:08.000183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:08.000221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:08.000308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:08.000346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 
00:35:47.355 [2024-11-18 07:21:08.000441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:08.000470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:08.000554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:08.000582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:08.000667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:08.000694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:08.000775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:08.000802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:08.000895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:08.000921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:08.001022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:08.001051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:08.001137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:08.001164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:08.001253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:08.001280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:08.001399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:08.001425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 00:35:47.355 [2024-11-18 07:21:08.001528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.355 [2024-11-18 07:21:08.001557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.355 qpair failed and we were unable to recover it. 
00:35:47.355 [2024-11-18 07:21:08.001641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.001668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.001750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.001776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.001859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.001895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.001977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.002003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.002088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.002115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.002204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.002231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.002312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.002339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.002415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.002441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.002547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.002576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.002660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.002686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 
00:35:47.356 [2024-11-18 07:21:08.002763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.002789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.002874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.002900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.002977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.003002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.003102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.003130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.003215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.003243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.003326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.003358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.003447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.003481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.003572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.003599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.003686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.003712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.003795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.003822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 
00:35:47.356 [2024-11-18 07:21:08.003901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.003927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.004012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.004045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.004129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.004156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.004248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.004294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.004418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.004446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.004538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.004566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.004649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.004675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.004757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.004787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.004883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.004909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.004989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.005017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 
00:35:47.356 [2024-11-18 07:21:08.005110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.356 [2024-11-18 07:21:08.005138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.356 qpair failed and we were unable to recover it. 00:35:47.356 [2024-11-18 07:21:08.005230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.005258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.005334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.005360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.005439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.005473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.005572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.005598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.005691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.005717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.005804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.005831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.005947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.005976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.006054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.006080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.006167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.006193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 
00:35:47.357 [2024-11-18 07:21:08.006280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.006308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.006401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.006432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.006526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.006555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.006665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.006692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.006777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.006803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.006890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.006916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.007004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.007030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.007104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.007130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.007207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.007238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.007335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.007363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 
00:35:47.357 [2024-11-18 07:21:08.007444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.007473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.007576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.007603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.007681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.007708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.007802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.007829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.007927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.007954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.008044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.008071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.008148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.008180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.008269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.008296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.008412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.008439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.008532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.008561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 
00:35:47.357 [2024-11-18 07:21:08.008686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.008714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.008806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.008833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.008933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.008960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.009049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.009076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.009167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.009195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.009280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.009309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.009422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.009450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.357 [2024-11-18 07:21:08.009543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.357 [2024-11-18 07:21:08.009570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.357 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.009656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.009683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.009797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.009824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 
00:35:47.358 [2024-11-18 07:21:08.009917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.009952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.010035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.010062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.010138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.010165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.010244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.010271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.010359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.010388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.010485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.010517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.010604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.010632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.010717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.010744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.010847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.010874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.010986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.011013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 
00:35:47.358 [2024-11-18 07:21:08.011094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.011120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.011203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.011229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.011318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.011345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.011461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.011497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.011591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.011619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.011708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.011734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.011822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.011848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.011934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.011960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.012050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.012095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.012215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.012243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 
00:35:47.358 [2024-11-18 07:21:08.012330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.012357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.012433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.012459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.012555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.012582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.012690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.012716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.012811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.012838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.012923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.012949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.013026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.013052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.013156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.013183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.013264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.013290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.013366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.013393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 
00:35:47.358 [2024-11-18 07:21:08.013485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.013518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.013610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.358 [2024-11-18 07:21:08.013636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.358 qpair failed and we were unable to recover it. 00:35:47.358 [2024-11-18 07:21:08.013729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.013757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.013873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.013899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.013988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.014016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.014095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.014122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.014222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.014249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.014340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.014367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.014454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.014481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.014587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.014614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 
00:35:47.359 [2024-11-18 07:21:08.014702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.014728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.014818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.014844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.014930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.014957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.015040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.015067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.015144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.015172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.015288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.015316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.015394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.015421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.015514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.015541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.015655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.015682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.015804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.015843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 
00:35:47.359 [2024-11-18 07:21:08.015943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.015971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.016056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.016082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.016169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.016196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.016279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.016305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.016388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.016415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.016504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.016531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.016644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.016670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.016776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.016802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.016882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.016914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.017024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.017050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 
00:35:47.359 [2024-11-18 07:21:08.017137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.017164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.017276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.359 [2024-11-18 07:21:08.017302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.359 qpair failed and we were unable to recover it. 00:35:47.359 [2024-11-18 07:21:08.017396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.017423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.017521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.017549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.017633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.017661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.017744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.017771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.017843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.017877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.017969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.017996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.018074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.018100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.018178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.018204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 
00:35:47.360 [2024-11-18 07:21:08.018302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.018329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.018413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.018439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.018554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.018592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.018684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.018712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.018789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.018815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.018892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.018918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.018999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.019026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.019129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.019167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.019254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.019282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.019365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.019394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 
00:35:47.360 [2024-11-18 07:21:08.019508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.019535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.019617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.019643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.019720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.019747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.019866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.019892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.019990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.020016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.020094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.020122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.020223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.020251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.020330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.020356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.020433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.020460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 00:35:47.360 [2024-11-18 07:21:08.020556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.360 [2024-11-18 07:21:08.020585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.360 qpair failed and we were unable to recover it. 
00:35:47.361 [2024-11-18 07:21:08.020662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.020688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.020802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.020828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.020936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.020963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.021046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.021072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.021156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.021183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.021303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.021332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.021429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.021468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.021568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.021596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.021673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.021705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.021800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.021826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 
00:35:47.361 [2024-11-18 07:21:08.021911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.021939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.022019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.022046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.022130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.022157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.022268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.022294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.022384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.022412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.022500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.022528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.022608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.022635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.022712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.022739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.022832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.022858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.022941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.022967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 
00:35:47.361 [2024-11-18 07:21:08.023079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.023104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.023192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.023219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.023316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.023354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.023440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.023466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.023565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.023592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.023697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.023722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.023806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.023832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.023929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.023954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.024036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.024063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.024151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.024177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 
00:35:47.361 [2024-11-18 07:21:08.024269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.361 [2024-11-18 07:21:08.024297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.361 qpair failed and we were unable to recover it. 00:35:47.361 [2024-11-18 07:21:08.024371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.024397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.024478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.024522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.024617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.024643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.024729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.024756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.024832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.024866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.024984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.025010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.025125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.025151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.025230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.025263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.025348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.025376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 
00:35:47.362 [2024-11-18 07:21:08.025457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.025484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.025572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.025597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.025674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.025699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.025781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.025806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.025891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.025918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.026001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.026028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.026121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.026148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.026239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.026265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.026376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.026402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.026494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.026522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 
00:35:47.362 [2024-11-18 07:21:08.026601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.026629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.026700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.026727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.026835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.026862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.026946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.026973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.027057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.027084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.027168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.027196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.027273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.027301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.027391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.027419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.027526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.027554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.027642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.027669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 
00:35:47.362 [2024-11-18 07:21:08.027762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.027801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.027898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.362 [2024-11-18 07:21:08.027929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.362 qpair failed and we were unable to recover it. 00:35:47.362 [2024-11-18 07:21:08.028022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.028050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.028125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.028152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.028224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.028251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.028333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.028362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.028444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.028472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.028576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.028605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.028687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.028714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.028789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.028816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 
00:35:47.363 [2024-11-18 07:21:08.028898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.028926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.029005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.029033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.029111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.029138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.029212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.029239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.029315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.029341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.029451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.029483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.029581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.029608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.029689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.029718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.029803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.029830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.029943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.029970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 
00:35:47.363 [2024-11-18 07:21:08.030050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.030078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.030159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.030188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.030277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.030306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.030400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.030426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.030542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.030569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.030650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.030677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.030753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.030779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.030887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.030913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.030987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.031014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.031140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.031169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 
00:35:47.363 [2024-11-18 07:21:08.031266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.031295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.031372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.031400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.031505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.031532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.031614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.031641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.363 [2024-11-18 07:21:08.031724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.363 [2024-11-18 07:21:08.031752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.363 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.031839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.031867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.031957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.031984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.032061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.032088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.032167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.032194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.032275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.032302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 
00:35:47.364 [2024-11-18 07:21:08.032382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.032409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.032498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.032526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.032611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.032642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.032730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.032767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.032848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.032876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.032994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.033021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.033103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.033131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.033220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.033248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.033334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.033362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.033450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.033478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 
00:35:47.364 [2024-11-18 07:21:08.033571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.033599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.033678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.033704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.033816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.033842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.033929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.033956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.034033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.034059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.034146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.034177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.034283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.034310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.034389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.034415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.034503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.034530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.034639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.034666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 
00:35:47.364 [2024-11-18 07:21:08.034747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.034773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.034855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.034883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.034966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.034993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.035069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.035096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.035170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.035197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.035323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.035364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.364 [2024-11-18 07:21:08.035482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.364 [2024-11-18 07:21:08.035517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.364 qpair failed and we were unable to recover it. 00:35:47.365 [2024-11-18 07:21:08.035630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.365 [2024-11-18 07:21:08.035659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.365 qpair failed and we were unable to recover it. 00:35:47.365 [2024-11-18 07:21:08.035737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.365 [2024-11-18 07:21:08.035765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.365 qpair failed and we were unable to recover it. 00:35:47.365 [2024-11-18 07:21:08.035867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.365 [2024-11-18 07:21:08.035895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.365 qpair failed and we were unable to recover it. 
00:35:47.365 [2024-11-18 07:21:08.035975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.365 [2024-11-18 07:21:08.036003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.365 qpair failed and we were unable to recover it. 00:35:47.365 [2024-11-18 07:21:08.036094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.365 [2024-11-18 07:21:08.036123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.365 qpair failed and we were unable to recover it. 00:35:47.365 [2024-11-18 07:21:08.036205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.365 [2024-11-18 07:21:08.036232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.365 qpair failed and we were unable to recover it. 00:35:47.365 [2024-11-18 07:21:08.036320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.365 [2024-11-18 07:21:08.036348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.365 qpair failed and we were unable to recover it. 00:35:47.365 [2024-11-18 07:21:08.036424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.365 [2024-11-18 07:21:08.036453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.365 qpair failed and we were unable to recover it. 00:35:47.365 [2024-11-18 07:21:08.036538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.365 [2024-11-18 07:21:08.036567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.365 qpair failed and we were unable to recover it. 00:35:47.365 [2024-11-18 07:21:08.036654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.365 [2024-11-18 07:21:08.036683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.365 qpair failed and we were unable to recover it. 00:35:47.365 [2024-11-18 07:21:08.036771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.365 [2024-11-18 07:21:08.036798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.365 qpair failed and we were unable to recover it. 00:35:47.365 [2024-11-18 07:21:08.036873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.365 [2024-11-18 07:21:08.036901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.365 qpair failed and we were unable to recover it. 00:35:47.365 [2024-11-18 07:21:08.036985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.365 [2024-11-18 07:21:08.037013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.365 qpair failed and we were unable to recover it. 
00:35:47.365 [2024-11-18 07:21:08.037103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.365 [2024-11-18 07:21:08.037130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.365 qpair failed and we were unable to recover it. 00:35:47.365 [2024-11-18 07:21:08.037207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.365 [2024-11-18 07:21:08.037234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.365 qpair failed and we were unable to recover it. 00:35:47.365 [2024-11-18 07:21:08.037325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.365 [2024-11-18 07:21:08.037358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.365 qpair failed and we were unable to recover it. 00:35:47.365 [2024-11-18 07:21:08.037444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.365 [2024-11-18 07:21:08.037471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.365 qpair failed and we were unable to recover it. 00:35:47.365 [2024-11-18 07:21:08.037571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.365 [2024-11-18 07:21:08.037598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.365 qpair failed and we were unable to recover it. 00:35:47.365 [2024-11-18 07:21:08.037676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.365 [2024-11-18 07:21:08.037704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.365 qpair failed and we were unable to recover it. 00:35:47.365 [2024-11-18 07:21:08.037783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.365 [2024-11-18 07:21:08.037810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.365 qpair failed and we were unable to recover it. 00:35:47.365 [2024-11-18 07:21:08.037891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.365 [2024-11-18 07:21:08.037919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.365 qpair failed and we were unable to recover it. 00:35:47.365 [2024-11-18 07:21:08.038007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.365 [2024-11-18 07:21:08.038035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.365 qpair failed and we were unable to recover it. 00:35:47.365 [2024-11-18 07:21:08.038112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.365 [2024-11-18 07:21:08.038141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.365 qpair failed and we were unable to recover it. 
00:35:47.365 [2024-11-18 07:21:08.038261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.365 [2024-11-18 07:21:08.038289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420
00:35:47.365 qpair failed and we were unable to recover it.
00:35:47.365-00:35:47.368 [2024-11-18 07:21:08.038388 - 07:21:08.049207] the same three-line sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / qpair failed and we were unable to recover it.) repeats for each reconnect attempt against tqpair=0x7f7734000b90, 0x7f772c000b90, 0x2170b40, and 0x7f7728000b90, all with addr=10.0.0.2, port=4420
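For anyone triaging the repeated failures above: errno 111 on Linux is ECONNREFUSED, meaning nothing is accepting TCP connections at 10.0.0.2 port 4420 (the standard NVMe over Fabrics port) while the target side is down, which is the condition the nvmf_target_disconnect test case traced further below sets up on purpose. The sketch that follows is not SPDK code; it is a minimal, self-contained illustration of how a plain POSIX connect() to a reachable address with no listener on the port surfaces exactly this errno. Only the address and port are taken from the log; everything else is assumed for the example.

/* Minimal sketch (not SPDK code): show the errno reported by
 * posix_sock_create above by attempting a TCP connect() to an
 * address/port with no listener. 10.0.0.2:4420 mirrors the log;
 * when the host is reachable but nothing listens on the port,
 * connect() fails with errno 111 (ECONNREFUSED). */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP listener port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With the target stopped, this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Built with cc and run against a host whose NVMe/TCP target is stopped, this prints the same "connect() failed, errno = 111" that the posix_sock_create records above report for every attempt.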
00:35:47.368 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:47.368 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:35:47.368 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:35:47.368 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:47.368 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:47.368 [2024-11-18 07:21:08.049291 - 07:21:08.050093] interleaved with the trace above, the failed-connect sequence continues (connect() failed, errno = 111 / sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.)
00:35:47.368-00:35:47.371 [2024-11-18 07:21:08.050177 - 07:21:08.060957] the same failed-connect sequence continues (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / qpair failed and we were unable to recover it.) for tqpair=0x2170b40, 0x7f7734000b90, 0x7f7728000b90, and 0x7f772c000b90, all with addr=10.0.0.2, port=4420
00:35:47.371 [2024-11-18 07:21:08.061045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.371 [2024-11-18 07:21:08.061073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.371 qpair failed and we were unable to recover it. 00:35:47.371 [2024-11-18 07:21:08.061153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.371 [2024-11-18 07:21:08.061180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.371 qpair failed and we were unable to recover it. 00:35:47.371 A controller has encountered a failure and is being reset. 00:35:47.371 [2024-11-18 07:21:08.061343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.371 [2024-11-18 07:21:08.061383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.371 qpair failed and we were unable to recover it. 00:35:47.371 [2024-11-18 07:21:08.061474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.371 [2024-11-18 07:21:08.061509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.371 qpair failed and we were unable to recover it. 00:35:47.371 [2024-11-18 07:21:08.061602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.371 [2024-11-18 07:21:08.061642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.371 qpair failed and we were unable to recover it. 00:35:47.371 [2024-11-18 07:21:08.061759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.371 [2024-11-18 07:21:08.061787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.371 qpair failed and we were unable to recover it. 00:35:47.371 [2024-11-18 07:21:08.061904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.371 [2024-11-18 07:21:08.061932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.371 qpair failed and we were unable to recover it. 00:35:47.371 [2024-11-18 07:21:08.062013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.371 [2024-11-18 07:21:08.062040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.371 qpair failed and we were unable to recover it. 00:35:47.371 [2024-11-18 07:21:08.062121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.371 [2024-11-18 07:21:08.062154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.371 qpair failed and we were unable to recover it. 00:35:47.371 [2024-11-18 07:21:08.062250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.371 [2024-11-18 07:21:08.062277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 
00:35:47.372 [2024-11-18 07:21:08.062826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.062859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.062976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.063003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.063092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.063119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.063202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.063229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.063317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.063344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.063426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.063453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.063551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.063595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.063688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.063717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.063807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.063834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.063921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.063949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 
00:35:47.372 [2024-11-18 07:21:08.064025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.064053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.064141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.064168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.064280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.064308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.064392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.064419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.064507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.064536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.064616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.064644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.064762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.064789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.064866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.064893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.065013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.065041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.065154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.065181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 
00:35:47.372 [2024-11-18 07:21:08.065282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.065310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.065426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.065453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.065558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.065587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.065669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.065697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.065778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.065806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.065918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.065946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.066032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.066060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.066143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.066171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.066256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.066283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.066388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.066424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 
00:35:47.372 [2024-11-18 07:21:08.066508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.066536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.066614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.066641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.372 qpair failed and we were unable to recover it. 00:35:47.372 [2024-11-18 07:21:08.066721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.372 [2024-11-18 07:21:08.066748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.066833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.066872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.066990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.067026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.067137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.067164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.067255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.067282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.067375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.067424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.067533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.067563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.067647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.067674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 
00:35:47.373 [2024-11-18 07:21:08.067759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.067786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.067861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.067887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.067991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.068025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.068118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.068158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.068253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.068281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.068361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.068391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:47.373 [2024-11-18 07:21:08.068512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.068545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.068632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.068660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:47.373 [2024-11-18 07:21:08.068751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.068780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it.
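Interleaved with the connection errors above, the xtrace output shows nvmf_target_disconnect_tc2 installing its cleanup trap (nvmf/common.sh@512) and then creating the target-side bdev via rpc_cmd bdev_malloc_create 64 512 -b Malloc0 (host/target_disconnect.sh@19). A rough standalone equivalent of that RPC call, offered only as a sketch and assuming an SPDK target listening on the default /var/tmp/spdk.sock (rpc_cmd in the autotest scripts appears to be a thin wrapper around scripts/rpc.py), would be:
# Sketch only, not taken from the test scripts: create a 64 MB malloc bdev with 512-byte blocks, named Malloc0.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0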
00:35:47.373 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.373 [2024-11-18 07:21:08.068898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:47.373 [2024-11-18 07:21:08.068927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.069009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.069048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.069129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.069157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.069247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.069273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.069364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.069394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.069524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.069553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.069636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.069663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.069747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.069774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.069901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.069928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 
00:35:47.373 [2024-11-18 07:21:08.070021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.070048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.070140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.070168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.070281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.070310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.070390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.070419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.070523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.373 [2024-11-18 07:21:08.070554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.373 qpair failed and we were unable to recover it. 00:35:47.373 [2024-11-18 07:21:08.070639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.070666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.070744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.070772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.070857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.070883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.070967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.070993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.071072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.071101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 
00:35:47.374 [2024-11-18 07:21:08.071193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.071231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.071317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.071344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.071458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.071507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.071630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.071664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.071741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.071769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.071889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.071917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.071995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.072023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.072113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.072141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.072220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.072248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.072369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.072398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 
00:35:47.374 [2024-11-18 07:21:08.072498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.072539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.072641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.072670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.072752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.072780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.072869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.072896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.073005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.073031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.073111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.073139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.073224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.073251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.073337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.073365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.073452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.073479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.073563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.073589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 
00:35:47.374 [2024-11-18 07:21:08.073697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.073723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.073838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.073865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.073947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.073975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.074057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.074083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.074164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.074190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.074264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.074294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.074402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.074429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.374 qpair failed and we were unable to recover it. 00:35:47.374 [2024-11-18 07:21:08.074523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.374 [2024-11-18 07:21:08.074551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.074632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.074659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.074748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.074775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 
00:35:47.375 [2024-11-18 07:21:08.074907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.074934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.075019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.075045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.075162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.075189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.075270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.075296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.075409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.075436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.075542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.075582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.075667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.075694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.075784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.075821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.075912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.075939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.076039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.076066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 
00:35:47.375 [2024-11-18 07:21:08.076176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.076203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.076280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.076307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.076385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.076411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.076516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.076545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.076636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.076663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.076746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.076776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.076887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.076916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.076999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.077027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.077121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.077149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.077232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.077260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 
00:35:47.375 [2024-11-18 07:21:08.077358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.077387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.077470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.077512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.077596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.077624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.077705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.077732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.077825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.077853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.077952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.077979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.078057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.078085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.078209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.078237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.078332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.078360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.078436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.078463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 
00:35:47.375 [2024-11-18 07:21:08.078588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.375 [2024-11-18 07:21:08.078616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.375 qpair failed and we were unable to recover it. 00:35:47.375 [2024-11-18 07:21:08.078705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.078734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.078821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.078849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.078950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.078977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.079071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.079098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.079181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.079209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.079307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.079347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.079452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.079481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.079615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.079643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.079765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.079803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 
00:35:47.376 [2024-11-18 07:21:08.079924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.079957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.080042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.080072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.080154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.080181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.080294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.080321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.080410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.080437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.080535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.080562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.080640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.080668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.080750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.080777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.080868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.080895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.080979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.081007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 
00:35:47.376 [2024-11-18 07:21:08.081136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.081163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.081255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.081284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.081384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.081424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.081549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.081578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.081674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.081701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.081818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.081844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.081961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.081988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.082100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.082129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.082214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.082241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 00:35:47.376 [2024-11-18 07:21:08.082327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.376 [2024-11-18 07:21:08.082355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.376 qpair failed and we were unable to recover it. 
00:35:47.376 [2024-11-18 07:21:08.082451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.082504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.082604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.082632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.082753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.082780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.082874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.082903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.082987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.083015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.083135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.083165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.083284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.083311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.083427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.083456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.083562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.083591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.083670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.083697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 
00:35:47.377 [2024-11-18 07:21:08.083839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.083866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.083944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.083971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f772c000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.084058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.084086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.084178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.084207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.084294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.084321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.084445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.084473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.084566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.084595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.084678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.084705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.084797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.084824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.084939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.084966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 
00:35:47.377 [2024-11-18 07:21:08.085050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.085094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.085211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.085240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.085336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.085363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.085453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.085481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.085602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.085630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.085716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.085743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.085829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.085866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.085998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.086025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.086147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.086174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.086246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.086272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 
00:35:47.377 [2024-11-18 07:21:08.086364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.086391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.377 [2024-11-18 07:21:08.086506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.377 [2024-11-18 07:21:08.086533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.377 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.086611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.086638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.086720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.086746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.086827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.086854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.086936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.086963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.087048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.087076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.087153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.087181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.087260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.087287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.087397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.087424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 
00:35:47.378 [2024-11-18 07:21:08.087511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.087538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.087618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.087646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.087731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.087758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.087836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.087862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.088811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.088845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.089000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.089028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.089139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.089166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.089253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.089281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.089378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.089416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.089532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.089561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 
00:35:47.378 [2024-11-18 07:21:08.089647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.089674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.089756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.089783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.089907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.089934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.090044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.090071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.090154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.090181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.090318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.090369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.090465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.090515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.090609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.090635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.090745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.090771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.090910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.090947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 
00:35:47.378 [2024-11-18 07:21:08.091029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.091058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.091148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.091183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.091276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.091301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.091381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.091407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.091483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.378 [2024-11-18 07:21:08.091522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.378 qpair failed and we were unable to recover it. 00:35:47.378 [2024-11-18 07:21:08.091599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.091624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.091705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.091731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.091815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.091840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.091945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.091970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.092089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.092115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 
00:35:47.379 [2024-11-18 07:21:08.092205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.092231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.092321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.092357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.092466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.092500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.092590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.092617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.092710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.092738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.092822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.092849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.092965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.092992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.093081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.093108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.093197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.093224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.093302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.093338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 
00:35:47.379 [2024-11-18 07:21:08.093415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.093441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.093541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.093569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.093666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.093692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.093773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.093800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.093896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.093923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.094002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.094029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.094122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.094148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.094237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.094269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.094356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.094383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.094479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.094510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 
00:35:47.379 [2024-11-18 07:21:08.094597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.094624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.094705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.094732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.094844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.094881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.094960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.094987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.095094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.095122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.095208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.095237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.095352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.379 [2024-11-18 07:21:08.095384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.379 qpair failed and we were unable to recover it. 00:35:47.379 [2024-11-18 07:21:08.095461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.095487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.095586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.095613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.095690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.095716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 
00:35:47.380 [2024-11-18 07:21:08.095823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.095848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.095948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.095983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.096064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.096090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.096171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.096198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.096286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.096311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.096397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.096423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.096511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.096539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.096623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.096649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.096727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.096753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.096835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.096861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 
00:35:47.380 [2024-11-18 07:21:08.096947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.096973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.097082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.097108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.097191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.097217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.097308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.097349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.097446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.097483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.097585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.097613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.097696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.097723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.097811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.097839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.097924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.097951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.098028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.098055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 
00:35:47.380 [2024-11-18 07:21:08.098136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.098164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.098270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.098298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.098377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.098404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.098485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.098518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.098633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.098661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.098744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.098772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.380 [2024-11-18 07:21:08.098866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.380 [2024-11-18 07:21:08.098893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.380 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.098976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.099004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.099102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.099130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.099212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.099245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 
00:35:47.381 [2024-11-18 07:21:08.099347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.099386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.099479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.099531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.099613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.099640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.099720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.099747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.099827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.099858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.099942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.099970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.100051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.100078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.100190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.100218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.100336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.100365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.100519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.100559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 
00:35:47.381 [2024-11-18 07:21:08.100647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.100675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7734000b90 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.100758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.100787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.100881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.100907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.101030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.101068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.101151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.101177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.101275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.101302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.101396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.101422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.101506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.101534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.101613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.101639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.101726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.101752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 
00:35:47.381 [2024-11-18 07:21:08.101867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.101893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.101993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.102032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.102120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.102147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.102234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.102266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.102387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.102413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170b40 with addr=10.0.0.2, port=4420 00:35:47.381 qpair failed and we were unable to recover it. 00:35:47.381 [2024-11-18 07:21:08.102520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.381 [2024-11-18 07:21:08.102549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 [2024-11-18 07:21:08.102630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.102658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 [2024-11-18 07:21:08.102771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.102799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 [2024-11-18 07:21:08.102880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.102908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 [2024-11-18 07:21:08.102993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.103020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 
00:35:47.382 [2024-11-18 07:21:08.103133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.103163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 [2024-11-18 07:21:08.103247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.103275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 [2024-11-18 07:21:08.103364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.103392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 [2024-11-18 07:21:08.103478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.103512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 [2024-11-18 07:21:08.103603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.103631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 [2024-11-18 07:21:08.103714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.103741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 Malloc0 00:35:47.382 [2024-11-18 07:21:08.103837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.103865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 [2024-11-18 07:21:08.103976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.104004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 [2024-11-18 07:21:08.104125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.104153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 
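The stray "Malloc0" interleaved above is most likely the stdout of a target-side bdev-creation RPC, echoed into the console while the host's connection errors were still streaming in; the reply is simply the name of the new bdev. A roughly equivalent standalone step, shown only as a hedged sketch (the size and block-size values are illustrative, not taken from this run), would be:

# Hedged sketch, not the exact autotest invocation: create a 64 MiB malloc
# bdev with 512-byte blocks named Malloc0 on a running SPDK target app.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0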
00:35:47.382 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.382 [2024-11-18 07:21:08.104241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.104275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:47.382 [2024-11-18 07:21:08.104360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.104393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 [2024-11-18 07:21:08.104496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.382 [2024-11-18 07:21:08.104525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:47.382 [2024-11-18 07:21:08.104633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.104661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 [2024-11-18 07:21:08.104744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.104772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 [2024-11-18 07:21:08.104851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.104889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 [2024-11-18 07:21:08.104978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.105011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 [2024-11-18 07:21:08.105098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.105125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 
00:35:47.382 [2024-11-18 07:21:08.105208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.105235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 [2024-11-18 07:21:08.105352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.105379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 [2024-11-18 07:21:08.105457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.105498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 [2024-11-18 07:21:08.105591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.105619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 [2024-11-18 07:21:08.105701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.105729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.382 qpair failed and we were unable to recover it. 00:35:47.382 [2024-11-18 07:21:08.105809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.382 [2024-11-18 07:21:08.105836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.105940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.105967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.106051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.106078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.106191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.106228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.106312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.106339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 
00:35:47.383 [2024-11-18 07:21:08.106448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.106475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.106568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.106595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.106672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.106699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.106787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.106814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.106899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.106926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.107011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.107040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.107124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.107152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.107240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.107266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.107353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.107381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.107508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.107536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 
00:35:47.383 [2024-11-18 07:21:08.107623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.107618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:47.383 [2024-11-18 07:21:08.107650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.107741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.107770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.107867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.107894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.108013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.108045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.108134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.108162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.108243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.108270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.108353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.108381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.108463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.108499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.108587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.108614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 
00:35:47.383 [2024-11-18 07:21:08.108742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.108769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.108864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.108892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.108975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.109002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.109090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.109117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.109207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.109236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.109346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.109373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.109458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.109486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.109584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.109611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.109692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.109719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 00:35:47.383 [2024-11-18 07:21:08.109818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.109856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.383 qpair failed and we were unable to recover it. 
00:35:47.383 [2024-11-18 07:21:08.109934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.383 [2024-11-18 07:21:08.109962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.384 qpair failed and we were unable to recover it. 00:35:47.384 [2024-11-18 07:21:08.110043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.384 [2024-11-18 07:21:08.110071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.384 qpair failed and we were unable to recover it. 00:35:47.384 [2024-11-18 07:21:08.110161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.384 [2024-11-18 07:21:08.110189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.384 qpair failed and we were unable to recover it. 00:35:47.384 [2024-11-18 07:21:08.110292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.384 [2024-11-18 07:21:08.110328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.384 qpair failed and we were unable to recover it. 00:35:47.384 [2024-11-18 07:21:08.110456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.384 [2024-11-18 07:21:08.110497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.384 qpair failed and we were unable to recover it. 00:35:47.384 [2024-11-18 07:21:08.110585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.384 [2024-11-18 07:21:08.110612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7728000b90 with addr=10.0.0.2, port=4420 00:35:47.384 qpair failed and we were unable to recover it. 00:35:47.384 [2024-11-18 07:21:08.110738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.384 [2024-11-18 07:21:08.110798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217e970 with addr=10.0.0.2, port=4420 00:35:47.384 [2024-11-18 07:21:08.110820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e970 is same with the state(6) to be set 00:35:47.384 [2024-11-18 07:21:08.110846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217e970 (9): Bad file descriptor 00:35:47.384 [2024-11-18 07:21:08.110877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:35:47.384 [2024-11-18 07:21:08.110892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:35:47.384 [2024-11-18 07:21:08.110908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:35:47.384 Unable to reset the controller. 
00:35:47.384 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.384 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:47.384 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.384 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:47.384 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.384 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:47.384 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.384 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:47.384 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.384 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:47.384 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.384 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:47.384 [2024-11-18 07:21:08.135826] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:47.384 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.384 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:47.384 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.384 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:47.384 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.384 07:21:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 404088 00:35:48.318 Controller properly reset. 
00:35:53.652 Initializing NVMe Controllers 00:35:53.652 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:53.652 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:53.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:35:53.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:35:53.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:35:53.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:35:53.652 Initialization complete. Launching workers. 00:35:53.652 Starting thread on core 1 00:35:53.652 Starting thread on core 2 00:35:53.652 Starting thread on core 3 00:35:53.652 Starting thread on core 0 00:35:53.652 07:21:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:35:53.652 00:35:53.652 real 0m10.614s 00:35:53.652 user 0m33.630s 00:35:53.652 sys 0m7.383s 00:35:53.652 07:21:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:53.652 07:21:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:53.652 ************************************ 00:35:53.652 END TEST nvmf_target_disconnect_tc2 00:35:53.652 ************************************ 00:35:53.652 07:21:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:35:53.652 07:21:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:35:53.652 07:21:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:35:53.652 07:21:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:53.653 07:21:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:35:53.653 07:21:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:53.653 07:21:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:35:53.653 07:21:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:53.653 07:21:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:53.653 rmmod nvme_tcp 00:35:53.653 rmmod nvme_fabrics 00:35:53.653 rmmod nvme_keyring 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 404728 ']' 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 404728 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 404728 ']' 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 404728 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = 
Linux ']' 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 404728 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 404728' 00:35:53.653 killing process with pid 404728 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 404728 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 404728 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:53.653 07:21:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:55.558 07:21:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:55.558 00:35:55.558 real 0m15.748s 00:35:55.558 user 0m58.757s 00:35:55.558 sys 0m10.116s 00:35:55.558 07:21:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:55.558 07:21:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:55.558 ************************************ 00:35:55.558 END TEST nvmf_target_disconnect 00:35:55.558 ************************************ 00:35:55.558 07:21:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:55.558 00:35:55.558 real 6m45.481s 00:35:55.558 user 17m32.815s 00:35:55.558 sys 1m29.110s 00:35:55.558 07:21:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:55.558 07:21:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.558 ************************************ 00:35:55.558 END TEST nvmf_host 00:35:55.558 ************************************ 00:35:55.558 07:21:16 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:35:55.558 07:21:16 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:35:55.558 07:21:16 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:35:55.558 07:21:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:55.558 07:21:16 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:55.558 07:21:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:55.558 ************************************ 00:35:55.558 START TEST nvmf_target_core_interrupt_mode 00:35:55.558 ************************************ 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:35:55.558 * Looking for test storage... 00:35:55.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:55.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.558 --rc genhtml_branch_coverage=1 00:35:55.558 --rc genhtml_function_coverage=1 00:35:55.558 --rc genhtml_legend=1 00:35:55.558 --rc geninfo_all_blocks=1 00:35:55.558 --rc geninfo_unexecuted_blocks=1 00:35:55.558 00:35:55.558 ' 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:55.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.558 --rc genhtml_branch_coverage=1 00:35:55.558 --rc genhtml_function_coverage=1 00:35:55.558 --rc genhtml_legend=1 00:35:55.558 --rc geninfo_all_blocks=1 00:35:55.558 --rc geninfo_unexecuted_blocks=1 00:35:55.558 00:35:55.558 ' 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:55.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.558 --rc genhtml_branch_coverage=1 00:35:55.558 --rc genhtml_function_coverage=1 00:35:55.558 --rc genhtml_legend=1 00:35:55.558 --rc geninfo_all_blocks=1 00:35:55.558 --rc geninfo_unexecuted_blocks=1 00:35:55.558 00:35:55.558 ' 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:55.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.558 --rc genhtml_branch_coverage=1 00:35:55.558 --rc genhtml_function_coverage=1 00:35:55.558 --rc genhtml_legend=1 00:35:55.558 --rc geninfo_all_blocks=1 00:35:55.558 --rc geninfo_unexecuted_blocks=1 00:35:55.558 00:35:55.558 ' 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:55.558 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.559 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.559 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.559 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:35:55.559 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.559 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:35:55.559 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:55.559 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:55.559 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:55.559 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:55.559 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:55.559 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:55.559 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:55.559 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:55.559 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:55.559 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:55.559 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:35:55.559 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:35:55.559 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:35:55.559 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:35:55.559 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:55.559 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:55.559 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:55.818 ************************************ 00:35:55.818 START TEST nvmf_abort 00:35:55.818 ************************************ 00:35:55.818 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:35:55.818 * Looking for test storage... 00:35:55.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:55.818 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:55.818 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:35:55.818 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:55.818 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:55.818 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:55.818 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:55.818 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:55.818 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:35:55.818 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:55.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.819 --rc genhtml_branch_coverage=1 00:35:55.819 --rc genhtml_function_coverage=1 00:35:55.819 --rc genhtml_legend=1 00:35:55.819 --rc geninfo_all_blocks=1 00:35:55.819 --rc geninfo_unexecuted_blocks=1 00:35:55.819 00:35:55.819 ' 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:55.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.819 --rc genhtml_branch_coverage=1 00:35:55.819 --rc genhtml_function_coverage=1 00:35:55.819 --rc genhtml_legend=1 00:35:55.819 --rc geninfo_all_blocks=1 00:35:55.819 --rc geninfo_unexecuted_blocks=1 00:35:55.819 00:35:55.819 ' 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:55.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.819 --rc genhtml_branch_coverage=1 00:35:55.819 --rc genhtml_function_coverage=1 00:35:55.819 --rc genhtml_legend=1 00:35:55.819 --rc geninfo_all_blocks=1 00:35:55.819 --rc geninfo_unexecuted_blocks=1 00:35:55.819 00:35:55.819 ' 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:55.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.819 --rc genhtml_branch_coverage=1 00:35:55.819 --rc genhtml_function_coverage=1 00:35:55.819 --rc genhtml_legend=1 00:35:55.819 --rc geninfo_all_blocks=1 00:35:55.819 --rc geninfo_unexecuted_blocks=1 00:35:55.819 00:35:55.819 ' 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:55.819 07:21:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:55.819 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:55.820 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:55.820 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:55.820 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:55.820 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:55.820 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:55.820 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:55.820 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:35:55.820 07:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:57.724 07:21:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:57.724 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:57.725 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
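The gather_supported_nvmf_pci_devs scan here keys its e810/x722/mlx arrays on PCI vendor:device IDs and then resolves each matching function to a kernel netdev through sysfs (the "Found net devices under ..." records that follow). A stand-alone sketch of that lookup for the 0x8086:0x159b parts seen in this run (illustrative only, not code from nvmf/common.sh):

#!/usr/bin/env bash
# Illustrative sketch: map E810-class functions (vendor 0x8086, device 0x159b)
# to their net devices, the way the harness walks /sys/bus/pci/devices/$pci/net/*.
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    for netdir in "$pci"/net/*; do
        [[ -e $netdir ]] || continue
        echo "Found ${pci##*/} -> ${netdir##*/}"    # e.g. 0000:0a:00.0 -> cvl_0_0
    done
done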
00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:57.725 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:57.725 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:57.725 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:57.725 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:57.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:57.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:35:57.985 00:35:57.985 --- 10.0.0.2 ping statistics --- 00:35:57.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:57.985 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:57.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:57.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:35:57.985 00:35:57.985 --- 10.0.0.1 ping statistics --- 00:35:57.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:57.985 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=407809 
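nvmf_tcp_init above splits the two detected ports so one host can play both roles: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), an iptables rule admits the NVMe/TCP port, and both directions are ping-verified. A condensed sketch of that bring-up with the interface names and addresses from this run (the iptables comment tag is simplified; it only needs to contain SPDK_NVMF so the later teardown can grep it out):

# condensed from the nvmf_tcp_init trace above
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF                     # tagged so cleanup can strip it later
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator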
00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 407809 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 407809 ']' 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:57.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:57.985 07:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:57.985 [2024-11-18 07:21:18.901059] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:57.985 [2024-11-18 07:21:18.902151] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:35:57.985 [2024-11-18 07:21:18.902218] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:58.244 [2024-11-18 07:21:18.981930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:58.244 [2024-11-18 07:21:19.027467] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:58.244 [2024-11-18 07:21:19.027530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:58.244 [2024-11-18 07:21:19.027556] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:58.244 [2024-11-18 07:21:19.027566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:58.244 [2024-11-18 07:21:19.027576] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:58.244 [2024-11-18 07:21:19.029011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:58.244 [2024-11-18 07:21:19.029069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:58.244 [2024-11-18 07:21:19.029074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:58.244 [2024-11-18 07:21:19.110814] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:58.244 [2024-11-18 07:21:19.111052] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:58.244 [2024-11-18 07:21:19.111056] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
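The target itself is then launched inside the namespace with -m 0xE (cores 1-3) and --interrupt-mode, and the harness blocks until the RPC socket answers. A rough equivalent of that launch-and-wait step, polling the default /var/tmp/spdk.sock with an existing RPC as a readiness probe (the harness uses its waitforlisten helper instead; this loop is only a sketch):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!   # pid of the ip-netns wrapper; kept here only for the sketch
# spdk_get_version is a cheap RPC to use as a readiness probe
until "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done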
00:35:58.244 [2024-11-18 07:21:19.111290] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:58.244 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:58.244 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:35:58.244 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:58.244 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:58.244 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:58.244 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:58.244 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:35:58.244 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.244 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:58.244 [2024-11-18 07:21:19.161748] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:58.244 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.244 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:35:58.244 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.244 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:58.244 Malloc0 00:35:58.244 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.244 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:58.244 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.244 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:58.244 Delay0 00:35:58.244 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.244 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:58.244 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.244 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:58.502 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.502 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:35:58.502 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
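With the app up, abort.sh drives configuration over the RPC socket: a TCP transport, a 64 MB Malloc bdev with 4096-byte blocks wrapped in a Delay bdev, and subsystem nqn.2016-06.io.spdk:cnode0; the namespace and listener registrations complete in the records that follow. Issued directly with rpc.py rather than through the rpc_cmd wrapper, the same sequence would look roughly like:

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0             # 64 MB backing store, 4096-byte blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000        # large artificial latencies so I/Os stay queued long enough to abort
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420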
00:35:58.502 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:58.502 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.502 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:58.502 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.502 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:58.502 [2024-11-18 07:21:19.237915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:58.502 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.502 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:58.502 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.502 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:58.502 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.502 07:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:35:58.502 [2024-11-18 07:21:19.381583] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:01.033 Initializing NVMe Controllers 00:36:01.033 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:01.033 controller IO queue size 128 less than required 00:36:01.033 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:01.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:01.033 Initialization complete. Launching workers. 
00:36:01.033 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29086 00:36:01.033 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29143, failed to submit 66 00:36:01.033 success 29086, unsuccessful 57, failed 0 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:01.034 rmmod nvme_tcp 00:36:01.034 rmmod nvme_fabrics 00:36:01.034 rmmod nvme_keyring 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 407809 ']' 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 407809 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 407809 ']' 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 407809 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 407809 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 407809' 00:36:01.034 killing process with pid 407809 00:36:01.034 
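The abort example above connects from the default namespace with -q 128 (queue depth), -c 0x1 (single core), -t 1 (one-second run) and -l warning, then submits an abort for every outstanding request. The counters it prints appear to reconcile: each successfully aborted request shows up as a failed I/O, and one abort is attempted per I/O. A quick check with the numbers copied from this run:

# arithmetic check of the run summary above (values copied from this log)
echo $((123 + 29086))   # I/Os: completed + aborted-and-failed   = 29209
echo $((29143 + 66))    # aborts: submitted + failed-to-submit   = 29209 (one per I/O)
echo $((29086 + 57))    # aborts: success + unsuccessful         = 29143 (all submitted)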
07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 407809 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 407809 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:01.034 07:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:02.938 07:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:02.938 00:36:02.938 real 0m7.344s 00:36:02.938 user 0m9.655s 00:36:02.938 sys 0m2.905s 00:36:02.938 07:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:02.938 07:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.938 ************************************ 00:36:02.938 END TEST nvmf_abort 00:36:02.938 ************************************ 00:36:03.197 07:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:03.197 07:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:03.197 07:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:03.197 07:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:03.197 ************************************ 00:36:03.197 START TEST nvmf_ns_hotplug_stress 00:36:03.197 ************************************ 00:36:03.197 07:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:03.197 * Looking for test storage... 
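The nvmf_abort teardown traced just above mirrors the setup: the subsystem is deleted, the nvme kernel modules are unloaded, the target process is killed, the tagged iptables rule is dropped, and the namespace and addresses are removed. Condensed, using the pid and interface names from this run (ip netns delete stands in for what remove_spdk_ns does here):

# condensed teardown, mirroring the nvmftestfini trace above
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill 407809                                            # the nvmf_tgt started earlier
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rule
ip netns delete cvl_0_0_ns_spdk                        # rough equivalent of remove_spdk_ns for this run
ip -4 addr flush cvl_0_1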
00:36:03.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:03.197 07:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:03.197 07:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:36:03.197 07:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:03.197 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:03.197 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:03.197 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:03.197 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:03.197 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:03.197 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:03.197 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:03.197 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:03.197 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:03.197 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:36:03.197 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:03.197 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:03.197 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:03.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.198 --rc genhtml_branch_coverage=1 00:36:03.198 --rc genhtml_function_coverage=1 00:36:03.198 --rc genhtml_legend=1 00:36:03.198 --rc geninfo_all_blocks=1 00:36:03.198 --rc geninfo_unexecuted_blocks=1 00:36:03.198 00:36:03.198 ' 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:03.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.198 --rc genhtml_branch_coverage=1 00:36:03.198 --rc genhtml_function_coverage=1 00:36:03.198 --rc genhtml_legend=1 00:36:03.198 --rc geninfo_all_blocks=1 00:36:03.198 --rc geninfo_unexecuted_blocks=1 00:36:03.198 00:36:03.198 ' 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:03.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.198 --rc genhtml_branch_coverage=1 00:36:03.198 --rc genhtml_function_coverage=1 00:36:03.198 --rc genhtml_legend=1 00:36:03.198 --rc geninfo_all_blocks=1 00:36:03.198 --rc geninfo_unexecuted_blocks=1 00:36:03.198 00:36:03.198 ' 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:03.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.198 --rc genhtml_branch_coverage=1 00:36:03.198 --rc genhtml_function_coverage=1 
00:36:03.198 --rc genhtml_legend=1 00:36:03.198 --rc geninfo_all_blocks=1 00:36:03.198 --rc geninfo_unexecuted_blocks=1 00:36:03.198 00:36:03.198 ' 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
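The lcov probe a few records above (lt 1.15 2 via cmp_versions) selects the coverage options by comparing dotted version strings field by field. A self-contained sketch of that comparison idiom (the function name below is illustrative, not the one in scripts/common.sh):

# illustrative dotted-version "less than" check in the spirit of cmp_versions
version_lt() {
    local IFS=.- i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
        (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
    done
    return 1    # versions are equal
}
version_lt 1.15 2 && echo "lcov is older than 2.x: use the 1.x option set"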
00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:03.198 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:03.199 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:03.199 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:03.199 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:03.199 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:03.199 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:36:03.199 07:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:05.732 07:21:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:05.732 07:21:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:05.732 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:05.732 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:05.733 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:05.733 
07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:05.733 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:05.733 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:05.733 07:21:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:05.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:05.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:36:05.733 00:36:05.733 --- 10.0.0.2 ping statistics --- 00:36:05.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:05.733 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:05.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:05.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:36:05.733 00:36:05.733 --- 10.0.0.1 ping statistics --- 00:36:05.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:05.733 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=410143 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 410143 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 410143 ']' 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:05.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
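For reference, the nvmf_tcp_init sequence traced above amounts to roughly the following shell sketch (reconstructed from this trace, not part of the captured output; cvl_0_0/cvl_0_1 are the ice port names detected on this host and 10.0.0.1/10.0.0.2 are the addresses the harness picked, so substitute your own):

  TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TARGET_IF" && ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"                                # dedicated namespace for the target side
  ip link set "$TARGET_IF" netns "$NS"              # move one physical port into it
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"       # initiator address stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF                # open the NVMe/TCP port on the initiator-facing NIC
  ping -c 1 10.0.0.2                                # root namespace -> target namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1            # target namespace -> root namespace

Keeping the target port in its own namespace forces the two ports to reach each other over the link instead of through the kernel's local routing, which is presumably why both ping checks above are performed before the target application is started.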
00:36:05.733 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:05.734 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:05.734 [2024-11-18 07:21:26.553351] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:05.734 [2024-11-18 07:21:26.554413] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:36:05.734 [2024-11-18 07:21:26.554474] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:05.734 [2024-11-18 07:21:26.625845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:05.734 [2024-11-18 07:21:26.669412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:05.734 [2024-11-18 07:21:26.669473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:05.734 [2024-11-18 07:21:26.669508] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:05.734 [2024-11-18 07:21:26.669520] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:05.734 [2024-11-18 07:21:26.669529] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:05.734 [2024-11-18 07:21:26.670949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:05.734 [2024-11-18 07:21:26.671016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:05.734 [2024-11-18 07:21:26.671019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:05.993 [2024-11-18 07:21:26.753129] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:05.993 [2024-11-18 07:21:26.753268] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:05.993 [2024-11-18 07:21:26.753280] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:05.993 [2024-11-18 07:21:26.753581] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
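The target itself is the stock nvmf_tgt application, launched inside that namespace with interrupt mode enabled. A minimal sketch of the launch-and-wait step shown above; the readiness poll is an approximation of what waitforlisten does and assumes the default /var/tmp/spdk.sock RPC socket:

  NS=cvl_0_0_ns_spdk
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!
  # Poll the RPC server until it answers; rpc_get_methods succeeds once the app is up.
  until "$SPDK/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.5
  done

With -m 0xE the app owns cores 1-3 (hence the three reactor notices), and --interrupt-mode is what produces the spdk_interrupt_mode_enable and spdk_thread_set_interrupt_mode notices above.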
00:36:05.993 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:05.993 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:36:05.993 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:05.993 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:05.993 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:05.993 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:05.993 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:36:05.993 07:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:06.252 [2024-11-18 07:21:27.063686] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:06.252 07:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:06.511 07:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:06.769 [2024-11-18 07:21:27.628049] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:06.769 07:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:07.028 07:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:07.287 Malloc0 00:36:07.287 07:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:07.546 Delay0 00:36:07.546 07:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:07.804 07:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:08.370 NULL1 00:36:08.370 07:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
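Everything the test has configured up to this point, plus the stress loop it is about to enter, can be summarised by the following sketch (same rpc.py, NQN and parameters as echoed in the trace; the loop body is inferred from the remove_ns/add_ns/bdev_null_resize commands that repeat below):

  RPC="$SPDK/scripts/rpc.py"    # talks to the nvmf_tgt started above
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 512 -b Malloc0                       # backing bdev for the delay bdev
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # becomes namespace 1
  $RPC bdev_null_create NULL1 1000 512
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1     # becomes namespace 2

  # Run spdk_nvme_perf against the subsystem and, while it is alive, keep
  # removing/re-adding namespace 1 and growing NULL1 one step per iteration.
  "$SPDK/build/bin/spdk_nvme_perf" -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do
      $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      $RPC bdev_null_resize NULL1 $((++null_size))
  done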
00:36:08.628 07:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=410442 00:36:08.628 07:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:08.628 07:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:08.628 07:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:36:09.561 Read completed with error (sct=0, sc=11) 00:36:09.820 07:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:09.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:09.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:09.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:09.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:09.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:09.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:10.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:10.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:10.079 07:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:10.079 07:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:10.337 true 00:36:10.337 07:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:10.337 07:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:11.270 07:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:11.270 07:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:11.270 07:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:11.528 true 00:36:11.528 07:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:11.528 07:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:36:11.786 07:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:12.044 07:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:12.044 07:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:12.302 true 00:36:12.302 07:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:12.302 07:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:12.560 07:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:12.817 07:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:12.817 07:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:13.075 true 00:36:13.333 07:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:13.333 07:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:14.267 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:14.267 07:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:14.267 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:14.267 07:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:14.267 07:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:14.524 true 00:36:14.782 07:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:14.782 07:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:15.040 07:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:15.299 07:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:15.299 07:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:15.582 true 00:36:15.583 07:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:15.583 07:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:15.841 07:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:16.099 07:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:16.099 07:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:16.357 true 00:36:16.357 07:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:16.357 07:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:17.303 07:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:17.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:17.562 07:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:17.562 07:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:17.819 true 00:36:17.819 07:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:17.819 07:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:18.076 07:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:18.333 07:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:18.333 07:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:18.590 true 00:36:18.590 07:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:18.590 07:21:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:18.848 07:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:19.106 07:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:19.106 07:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:19.365 true 00:36:19.365 07:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:19.365 07:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:20.299 07:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:20.557 07:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:20.557 07:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:20.815 true 00:36:20.815 07:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:20.815 07:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:21.074 07:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:21.332 07:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:21.332 07:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:21.590 true 00:36:21.590 07:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:21.590 07:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:21.848 07:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:22.107 07:21:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:22.107 07:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:22.365 true 00:36:22.365 07:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:22.365 07:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:23.300 07:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:23.559 07:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:23.559 07:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:23.817 true 00:36:23.817 07:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:23.817 07:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:24.075 07:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:24.333 07:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:24.333 07:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:24.591 true 00:36:24.591 07:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:24.591 07:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:25.157 07:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:25.157 07:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:25.157 07:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:25.414 true 00:36:25.415 07:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:25.415 07:21:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:26.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:26.348 07:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:26.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:26.607 07:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:26.607 07:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:26.864 true 00:36:27.122 07:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:27.122 07:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:27.380 07:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:27.638 07:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:27.638 07:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:27.896 true 00:36:27.897 07:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:27.897 07:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:28.461 07:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:28.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:28.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:28.719 07:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:28.719 07:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:28.977 true 00:36:28.977 07:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:28.977 07:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:36:29.543 07:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:29.543 07:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:29.543 07:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:36:29.800 true 00:36:30.058 07:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:30.058 07:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:30.992 07:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:30.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:30.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:30.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:30.992 07:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:36:30.992 07:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:36:31.250 true 00:36:31.250 07:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:31.250 07:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:31.507 07:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:31.765 07:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:31.765 07:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:32.023 true 00:36:32.281 07:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:32.281 07:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:33.214 07:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:33.214 07:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:33.214 07:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:36:33.472 true 00:36:33.472 07:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:33.472 07:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:34.037 07:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:34.037 07:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:34.037 07:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:34.295 true 00:36:34.295 07:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:34.295 07:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:34.553 07:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:34.811 07:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:36:34.811 07:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:36:35.069 true 00:36:35.327 07:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:35.327 07:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:36.260 07:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:36.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:36.518 07:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:36:36.518 07:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1026 00:36:36.775 true 00:36:36.775 07:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:36.775 07:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:37.033 07:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:37.291 07:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:36:37.291 07:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:36:37.549 true 00:36:37.549 07:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:37.549 07:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:37.807 07:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:38.065 07:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:36:38.065 07:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:36:38.326 true 00:36:38.326 07:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:38.326 07:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:39.262 Initializing NVMe Controllers 00:36:39.262 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:39.262 Controller IO queue size 128, less than required. 00:36:39.262 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:39.262 Controller IO queue size 128, less than required. 00:36:39.262 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:39.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:39.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:36:39.262 Initialization complete. Launching workers. 
00:36:39.262 ======================================================== 00:36:39.262 Latency(us) 00:36:39.262 Device Information : IOPS MiB/s Average min max 00:36:39.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 718.97 0.35 79856.24 3362.85 1013262.51 00:36:39.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8673.90 4.24 14758.03 2457.96 449337.25 00:36:39.262 ======================================================== 00:36:39.262 Total : 9392.87 4.59 19740.90 2457.96 1013262.51 00:36:39.262 00:36:39.262 07:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.520 07:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:36:39.520 07:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:36:39.778 true 00:36:39.778 07:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410442 00:36:39.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (410442) - No such process 00:36:39.778 07:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 410442 00:36:39.778 07:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:40.036 07:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:40.293 07:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:36:40.293 07:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:36:40.293 07:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:36:40.293 07:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:40.293 07:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:36:40.551 null0 00:36:40.809 07:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:40.809 07:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:40.809 07:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:36:40.809 null1 00:36:41.067 07:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:41.067 07:22:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:41.067 07:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:36:41.325 null2 00:36:41.325 07:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:41.325 07:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:41.325 07:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:36:41.585 null3 00:36:41.585 07:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:41.585 07:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:41.585 07:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:36:41.843 null4 00:36:41.844 07:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:41.844 07:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:41.844 07:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:36:42.102 null5 00:36:42.102 07:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:42.102 07:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:42.102 07:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:36:42.360 null6 00:36:42.360 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:42.360 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:42.360 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:36:42.620 null7 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:42.620 07:22:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
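The phase being started here runs eight add/remove workers in parallel, one per freshly created null bdev; the following sketch is inferred from the nsid/bdev values traced around it, not taken verbatim from the script:

  # Eight workers, each repeatedly attaching and detaching its own namespace ID.
  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          $RPC nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }
  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      $RPC bdev_null_create "null$i" 100 4096
  done
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &   # worker i owns NSID i+1
      pids+=($!)
  done
  wait "${pids[@]}"

Because all eight workers target the same subsystem concurrently, this exercises the namespace add/remove paths under contention; the join point is the wait on the worker PIDs (414446 414447 ...) visible further down in the trace.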
00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 414446 414447 414449 414450 414453 414455 414457 414459 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:42.620 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:42.879 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.879 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:42.879 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:42.879 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:42.879 07:22:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:42.879 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:42.879 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:42.879 07:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:43.137 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:43.137 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:43.137 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:43.137 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:43.138 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:43.138 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:43.138 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:43.138 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:43.138 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:43.138 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:43.138 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:43.138 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:43.138 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:43.138 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:43.138 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:36:43.138 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:43.138 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:43.138 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:43.138 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:43.138 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:43.138 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:43.138 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:43.138 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:43.138 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:43.396 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:43.396 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:43.396 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:43.396 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:43.396 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:43.396 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:43.396 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:43.396 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
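The xtrace above repeats a single pattern: each add_remove worker binds one namespace ID to one null bdev and loops ten times, attaching the namespace to nqn.2016-06.io.spdk:cnode1 with nvmf_subsystem_add_ns and detaching it again with nvmf_subsystem_remove_ns, while the parent launches eight such workers in the background, collects their PIDs, and waits on them. Below is a minimal sketch of that loop reconstructed only from the trace; the variable names (rpc_py, subsystem, nthreads) and the exact function body are assumptions for illustration, not a verbatim copy of target/ns_hotplug_stress.sh.

#!/usr/bin/env bash
# Sketch of the namespace hotplug stress pattern seen in the trace above.
# Names below are reconstructed from the xtrace, not copied from the script.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subsystem=nqn.2016-06.io.spdk:cnode1
nthreads=8

add_remove() {
	local nsid=$1 bdev=$2
	# Ten add/remove cycles per namespace, matching the (( i < 10 )) loop in the trace.
	for ((i = 0; i < 10; i++)); do
		"$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$subsystem" "$bdev"
		"$rpc_py" nvmf_subsystem_remove_ns "$subsystem" "$nsid"
	done
}

pids=()
# One background worker per namespace: nsid 1..8 against bdevs null0..null7.
for ((i = 0; i < nthreads; i++)); do
	add_remove $((i + 1)) "null$i" &
	pids+=($!)
done
wait "${pids[@]}"

Because the eight workers run concurrently, their add/remove RPCs interleave arbitrarily in the log, which is why the nsid order of the remove calls differs from one cycle to the next; the test exercises exactly that race between namespace attach and detach on the target while it runs in interrupt mode.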
00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:43.655 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:43.914 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:44.172 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:44.172 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:44.172 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:44.172 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:44.172 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:44.172 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:44.172 07:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:44.430 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:44.431 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:44.431 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:44.431 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:44.431 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:44.431 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:44.431 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:44.431 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:44.431 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:44.431 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:44.431 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:44.431 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:44.431 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:44.431 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:44.431 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:44.431 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:44.431 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:44.431 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:44.431 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:44.431 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:44.431 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:44.431 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:44.431 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:44.431 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:44.689 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:44.689 07:22:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:44.689 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:44.689 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:44.689 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:44.689 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:44.689 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:44.689 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:44.947 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:44.947 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:44.947 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:44.947 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:44.947 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:44.947 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:44.947 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:44.947 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:44.947 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:44.947 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:44.947 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:44.947 07:22:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:44.947 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:44.947 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:44.947 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:44.947 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:44.947 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:44.947 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:44.947 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:44.947 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:44.947 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:44.947 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:44.947 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:44.947 07:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:45.206 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:45.206 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:45.206 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:45.206 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:45.206 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:45.206 
07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:45.206 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.206 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:45.464 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:45.464 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:45.464 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:45.464 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:45.464 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:45.464 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:45.464 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:45.464 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:45.464 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:45.465 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:45.465 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:45.465 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:45.465 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:45.465 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:45.465 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:45.465 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:45.465 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:45.465 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:45.465 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:45.465 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:45.465 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:45.465 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:45.465 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:45.465 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:45.723 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:45.723 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:45.723 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:45.723 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:45.723 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.981 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:45.981 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:45.981 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:46.240 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.240 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:36:46.240 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:46.240 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.240 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.240 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:46.240 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.240 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.240 07:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:46.240 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.240 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.240 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:46.240 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.240 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.240 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:46.240 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.240 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.240 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:46.240 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.240 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.240 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:46.240 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.240 
07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.240 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:46.498 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:46.498 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:46.498 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:46.498 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:46.498 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.498 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:46.498 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:46.498 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:46.755 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.755 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.755 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:46.755 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.755 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.755 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:46.755 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.755 07:22:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.755 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:46.755 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.755 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.755 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:46.755 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.755 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.755 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:46.755 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.755 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.755 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.755 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:46.755 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.755 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:46.755 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.755 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.755 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:47.013 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:47.013 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:47.013 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:47.013 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:47.013 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:47.013 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:47.013 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:47.013 07:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:47.272 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.272 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.272 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:47.272 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.272 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.272 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:47.272 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.272 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.272 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:47.272 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.272 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.273 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:47.273 07:22:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.273 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.273 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:47.273 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.273 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.273 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:47.273 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.273 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.273 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:47.273 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.273 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.273 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:47.532 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:47.532 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:47.532 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:47.532 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:47.791 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:47.791 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:47.791 
07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:47.791 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.050 07:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:48.308 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:48.308 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:48.308 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.308 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:48.308 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:48.308 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:48.308 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:48.308 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:48.566 rmmod nvme_tcp 00:36:48.566 rmmod nvme_fabrics 00:36:48.566 rmmod nvme_keyring 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 410143 ']' 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 410143 00:36:48.566 07:22:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 410143 ']' 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 410143 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 410143 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 410143' 00:36:48.566 killing process with pid 410143 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 410143 00:36:48.566 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 410143 00:36:48.825 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:48.825 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:48.825 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:48.825 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:36:48.825 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:36:48.825 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:48.825 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:36:48.825 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:48.825 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:48.825 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:48.825 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:48.825 07:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:51.367 00:36:51.367 real 0m47.844s 00:36:51.367 user 3m19.490s 00:36:51.367 sys 0m21.920s 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:51.367 07:22:11 
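After the loop finishes, nvmftestfini tears the fixture down; the rmmod output, the kill of pid 410143, the SPDK_NVMF-tagged iptables cleanup and the address flush are all visible in the trace above. A rough sketch of that teardown under the same names; deleting the cvl_0_0_ns_spdk namespace is an assumption, since the trace only shows _remove_spdk_ns being invoked, not its body:

    nvmfpid=410143                                         # nvmf_tgt pid in this run
    kill "$nvmfpid"                                        # killprocess
    modprobe -r nvme-tcp                                   # drops nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove only the SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null            # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1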
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:51.367 ************************************ 00:36:51.367 END TEST nvmf_ns_hotplug_stress 00:36:51.367 ************************************ 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:51.367 ************************************ 00:36:51.367 START TEST nvmf_delete_subsystem 00:36:51.367 ************************************ 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:36:51.367 * Looking for test storage... 00:36:51.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:36:51.367 07:22:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:51.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.367 --rc genhtml_branch_coverage=1 00:36:51.367 --rc genhtml_function_coverage=1 00:36:51.367 --rc genhtml_legend=1 00:36:51.367 --rc geninfo_all_blocks=1 00:36:51.367 --rc geninfo_unexecuted_blocks=1 00:36:51.367 00:36:51.367 ' 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:51.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.367 --rc genhtml_branch_coverage=1 00:36:51.367 --rc genhtml_function_coverage=1 00:36:51.367 --rc genhtml_legend=1 00:36:51.367 --rc geninfo_all_blocks=1 00:36:51.367 --rc geninfo_unexecuted_blocks=1 00:36:51.367 00:36:51.367 ' 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:51.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.367 --rc genhtml_branch_coverage=1 00:36:51.367 --rc genhtml_function_coverage=1 00:36:51.367 --rc genhtml_legend=1 00:36:51.367 --rc geninfo_all_blocks=1 00:36:51.367 --rc 
geninfo_unexecuted_blocks=1 00:36:51.367 00:36:51.367 ' 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:51.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.367 --rc genhtml_branch_coverage=1 00:36:51.367 --rc genhtml_function_coverage=1 00:36:51.367 --rc genhtml_legend=1 00:36:51.367 --rc geninfo_all_blocks=1 00:36:51.367 --rc geninfo_unexecuted_blocks=1 00:36:51.367 00:36:51.367 ' 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:51.367 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:51.368 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:51.368 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:51.368 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:51.368 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:51.368 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:51.368 07:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:51.368 07:22:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:36:51.368 07:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:53.270 07:22:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:53.270 07:22:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:53.270 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:53.270 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:53.270 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:53.270 07:22:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:53.271 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:53.271 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:53.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:53.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:36:53.271 00:36:53.271 --- 10.0.0.2 ping statistics --- 00:36:53.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:53.271 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:53.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:53.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:36:53.271 00:36:53.271 --- 10.0.0.1 ping statistics --- 00:36:53.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:53.271 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:53.271 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:53.530 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:36:53.530 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:53.530 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:53.530 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:53.530 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=417333 00:36:53.530 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:36:53.530 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 417333 00:36:53.530 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 417333 ']' 00:36:53.530 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:53.530 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:53.530 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:53.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
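The nvmf_tcp_init steps traced above move the first e810 port (cvl_0_0) into a private network namespace to act as the target at 10.0.0.2, keep the second port (cvl_0_1) in the root namespace as the initiator at 10.0.0.1, open TCP/4420 with an SPDK-tagged iptables rule, and ping-check both directions before the target application is started. Collected in one place, with the arguments copied from the trace:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator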
00:36:53.530 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:53.530 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:53.530 [2024-11-18 07:22:14.317173] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:53.530 [2024-11-18 07:22:14.318282] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:36:53.530 [2024-11-18 07:22:14.318344] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:53.530 [2024-11-18 07:22:14.390162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:53.530 [2024-11-18 07:22:14.431912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:53.530 [2024-11-18 07:22:14.431976] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:53.530 [2024-11-18 07:22:14.432005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:53.530 [2024-11-18 07:22:14.432017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:53.530 [2024-11-18 07:22:14.432027] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:53.530 [2024-11-18 07:22:14.433363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:53.530 [2024-11-18 07:22:14.433368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:53.797 [2024-11-18 07:22:14.514305] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:53.797 [2024-11-18 07:22:14.514342] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:53.797 [2024-11-18 07:22:14.514629] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
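With the namespace wired up, nvmfappstart launches the target inside it: pid 417333, two cores (-m 0x3), all tracepoint groups (-e 0xFFFF), and --interrupt-mode, which is what produces the spdk_interrupt_mode_enable and per-thread "intr mode" notices above. A sketch of the launch and of waiting for the RPC socket; waitforlisten is the autotest helper that does the real wait, and polling spdk_get_version here is only a stand-in for it:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # stand-in for waitforlisten: poll until the app answers on /var/tmp/spdk.sock
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done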
00:36:53.797 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:53.797 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:36:53.797 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:53.797 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:53.797 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:53.797 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:53.797 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:53.797 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.797 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:53.797 [2024-11-18 07:22:14.570170] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:53.797 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.797 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:53.797 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.797 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:53.797 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.797 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:53.797 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.797 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:53.797 [2024-11-18 07:22:14.590353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:53.797 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.797 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:36:53.797 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.797 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:53.797 NULL1 00:36:53.797 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.797 07:22:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:53.797 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.798 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:53.798 Delay0 00:36:53.798 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.798 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:53.798 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.798 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:53.798 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.798 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=417359 00:36:53.798 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:36:53.798 07:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:36:53.798 [2024-11-18 07:22:14.669506] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
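The delete_subsystem fixture is then built entirely over RPC: a TCP transport created with the traced options (-t tcp -o -u 8192), subsystem cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, a 1000 MB null bdev with a 512-byte block size, and a delay bdev (Delay0) that adds roughly one second of latency to every I/O so that plenty of requests are still in flight when the subsystem is deleted out from under them. The same sequence as a plain script, arguments copied from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512
    "$rpc" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0
    # drive I/O from the initiator side, then delete the subsystem underneath it
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2
    "$rpc" nvmf_delete_subsystem "$nqn"

The "completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines that follow are the point of the exercise: I/Os held back by the delay bdev are aborted as the subsystem's queue pairs are torn down, and the perf tool is expected to report the failures without crashing.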
00:36:55.822 07:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:55.822 07:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.822 07:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Write completed with error (sct=0, sc=8) 00:36:55.822 starting I/O failed: -6 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 starting I/O failed: -6 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Write completed with error (sct=0, sc=8) 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Write completed with error (sct=0, sc=8) 00:36:55.822 starting I/O failed: -6 00:36:55.822 Write completed with error (sct=0, sc=8) 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 starting I/O failed: -6 00:36:55.822 Write completed with error (sct=0, sc=8) 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Write completed with error (sct=0, sc=8) 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 starting I/O failed: -6 00:36:55.822 Write completed with error (sct=0, sc=8) 00:36:55.822 Write completed with error (sct=0, sc=8) 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Write completed with error (sct=0, sc=8) 00:36:55.822 starting I/O failed: -6 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Write completed with error (sct=0, sc=8) 00:36:55.822 Write completed with error (sct=0, sc=8) 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 starting I/O failed: -6 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 starting I/O failed: -6 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Write completed with error (sct=0, sc=8) 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 starting I/O failed: -6 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Write completed with error (sct=0, sc=8) 00:36:55.822 starting I/O failed: -6 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Write completed with error (sct=0, sc=8) 00:36:55.822 starting I/O failed: -6 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 [2024-11-18 07:22:16.749938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d1810 is same with the state(6) to be set 00:36:55.822 Read completed with error (sct=0, sc=8) 00:36:55.822 Write completed with error (sct=0, sc=8) 00:36:55.822 Read 
completed with error (sct=0, sc=8) 00:36:55.822 starting I/O failed: -6
00:36:55.822 [repeated "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" records, interleaved with further "starting I/O failed: -6" markers, spanning 00:36:55.822 through 00:36:57.016, omitted here]
00:36:55.823 [2024-11-18 07:22:16.750676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f98a400d4b0 is same with the state(6) to be set
00:36:56.757 [2024-11-18 07:22:17.725482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22df5b0 is same with the state(6) to be set
00:36:57.015 [2024-11-18 07:22:17.752394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d13f0 is same with the state(6) to be set
00:36:57.015 [2024-11-18 07:22:17.752601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d1b40 is same with the state(6) to be set
00:36:57.016 [2024-11-18 07:22:17.752974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f98a400d020 is same with the state(6) to be set
00:36:57.016 [2024-11-18 07:22:17.753133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f98a400d7e0 is same with the state(6) to be set
00:36:57.016 Initializing NVMe Controllers
00:36:57.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:57.016 Controller IO queue size 128, less than required.
00:36:57.016 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:36:57.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:36:57.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:36:57.016 Initialization complete. Launching workers.
00:36:57.016 ========================================================
00:36:57.016                                                                                Latency(us)
00:36:57.016 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:36:57.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     169.73       0.08  897468.98     708.42 1012536.01
00:36:57.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     162.78       0.08  911614.05     350.67 1012993.44
00:36:57.016 ========================================================
00:36:57.016 Total                                                                    :     332.50       0.16  904393.73     350.67 1012993.44
00:36:57.016
00:36:57.016 [2024-11-18 07:22:17.753929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22df5b0 (9): Bad file descriptor 00:36:57.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:36:57.016 07:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.016 07:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:36:57.016 07:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 417359 00:36:57.016 07:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 417359 00:36:57.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (417359) - No such process 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 417359 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 417359 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 417359 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:36:57.584
07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:57.584 [2024-11-18 07:22:18.274303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=417768 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417768 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:36:57.584 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:57.584 [2024-11-18 07:22:18.336788] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
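The trace above re-creates nqn.2016-06.io.spdk:cnode1 and launches a second spdk_nvme_perf run whose subsystem will again be deleted while I/O is in flight. A rough bash sketch of that flow (not the actual delete_subsystem.sh; the NQN, address, port and perf flags come from the trace, while the rpc.py path and the pre-existing Delay0 bdev are assumptions):
# Sketch only: assumes the nvmf target is already running and the Delay0 bdev exists.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$rootdir/scripts/rpc.py"
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns "$nqn" Delay0

# Drive I/O in the background; the test deletes the subsystem while this runs.
"$rootdir/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# Poll until perf exits, as the delay/sleep loop in the trace does.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && { echo "spdk_nvme_perf did not exit"; break; }
    sleep 0.5
done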
00:36:57.844 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:57.844 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417768 00:36:57.844 07:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:58.412 07:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:58.412 07:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417768 00:36:58.412 07:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:58.978 07:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:58.978 07:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417768 00:36:58.978 07:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:59.545 07:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:59.545 07:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417768 00:36:59.545 07:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:00.114 07:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:00.114 07:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417768 00:37:00.114 07:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:00.372 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:00.372 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417768 00:37:00.372 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:00.630 Initializing NVMe Controllers 00:37:00.630 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:00.630 Controller IO queue size 128, less than required. 00:37:00.630 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:00.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:00.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:00.630 Initialization complete. Launching workers. 
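As a side note, the MiB/s column in these perf summaries follows directly from the IOPS column and the 512-byte I/O size (-o 512 in the perf invocation above); a quick check with the numbers from the tables:
awk 'BEGIN { printf "%.2f MiB/s\n", 169.73 * 512 / (1024 * 1024) }'   # ~0.08, matches the first summary above
awk 'BEGIN { printf "%.2f MiB/s\n", 128.00 * 512 / (1024 * 1024) }'   # ~0.06, matches the summary below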
00:37:00.630 ========================================================
00:37:00.630                                                                                Latency(us)
00:37:00.630 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:37:00.630 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1004740.25 1000178.06 1013010.37
00:37:00.630 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1006037.85 1000261.72 1043474.45
00:37:00.630 ========================================================
00:37:00.630 Total                                                                    :     256.00       0.12 1005389.05 1000178.06 1043474.45
00:37:00.630
00:37:00.889 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:00.889 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417768 00:37:00.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (417768) - No such process 00:37:00.889 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 417768 00:37:00.889 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:37:00.889 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:37:00.889 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:00.889 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:37:00.889 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:00.889 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:37:00.889 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:00.889 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:00.889 rmmod nvme_tcp 00:37:00.889 rmmod nvme_fabrics 00:37:00.889 rmmod nvme_keyring 00:37:00.889 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:00.889 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:37:00.889 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:37:00.889 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 417333 ']' 00:37:00.889 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 417333 00:37:00.889 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 417333 ']' 00:37:00.889 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 417333 00:37:00.889 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:37:00.889 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:00.889 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 417333 00:37:01.148 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:01.148 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:01.148 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 417333' 00:37:01.148 killing process with pid 417333 00:37:01.148 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 417333 00:37:01.148 07:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 417333 00:37:01.148 07:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:01.148 07:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:01.148 07:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:01.148 07:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:37:01.148 07:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:37:01.148 07:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:01.148 07:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:37:01.148 07:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:01.148 07:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:01.148 07:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:01.148 07:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:01.148 07:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:03.684 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:03.684 00:37:03.684 real 0m12.288s 00:37:03.684 user 0m24.412s 00:37:03.684 sys 0m3.741s 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:03.685 ************************************ 00:37:03.685 END TEST nvmf_delete_subsystem 00:37:03.685 ************************************ 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:03.685 ************************************ 00:37:03.685 START TEST nvmf_host_management 00:37:03.685 ************************************ 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:03.685 * Looking for test storage... 00:37:03.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:03.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.685 --rc genhtml_branch_coverage=1 00:37:03.685 --rc genhtml_function_coverage=1 00:37:03.685 --rc genhtml_legend=1 00:37:03.685 --rc geninfo_all_blocks=1 00:37:03.685 --rc geninfo_unexecuted_blocks=1 00:37:03.685 00:37:03.685 ' 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:03.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.685 --rc genhtml_branch_coverage=1 00:37:03.685 --rc genhtml_function_coverage=1 00:37:03.685 --rc genhtml_legend=1 00:37:03.685 --rc geninfo_all_blocks=1 00:37:03.685 --rc geninfo_unexecuted_blocks=1 00:37:03.685 00:37:03.685 ' 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:03.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.685 --rc genhtml_branch_coverage=1 00:37:03.685 --rc genhtml_function_coverage=1 00:37:03.685 --rc genhtml_legend=1 00:37:03.685 --rc geninfo_all_blocks=1 00:37:03.685 --rc geninfo_unexecuted_blocks=1 00:37:03.685 00:37:03.685 ' 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:03.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.685 --rc genhtml_branch_coverage=1 00:37:03.685 --rc genhtml_function_coverage=1 00:37:03.685 --rc genhtml_legend=1 
00:37:03.685 --rc geninfo_all_blocks=1 00:37:03.685 --rc geninfo_unexecuted_blocks=1 00:37:03.685 00:37:03.685 ' 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:03.685 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:03.686 07:22:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:37:03.686 07:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:05.591 07:22:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:05.591 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:05.592 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:05.592 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
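The "Found net devices under ..." lines that follow come from the sysfs lookup visible in the trace (pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)); as a standalone illustration, with the BDF taken from the e810 device found above:
# Illustrative only: map a PCI function to its kernel net device the same way
# nvmf/common.sh does here; 0000:0a:00.0 is the device reported in this trace.
pci=0000:0a:00.0
for netdev in /sys/bus/pci/devices/$pci/net/*; do
    [ -e "$netdev" ] && echo "Found net device under $pci: ${netdev##*/}"
done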
00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:05.592 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:05.592 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:05.592 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:05.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:05.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:37:05.853 00:37:05.853 --- 10.0.0.2 ping statistics --- 00:37:05.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:05.853 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:05.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:05.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:37:05.853 00:37:05.853 --- 10.0.0.1 ping statistics --- 00:37:05.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:05.853 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=420225 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 420225 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 420225 ']' 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:05.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:05.853 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:05.853 [2024-11-18 07:22:26.647248] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:05.853 [2024-11-18 07:22:26.648252] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:37:05.853 [2024-11-18 07:22:26.648317] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:05.854 [2024-11-18 07:22:26.721382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:05.854 [2024-11-18 07:22:26.773517] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:05.854 [2024-11-18 07:22:26.773590] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:05.854 [2024-11-18 07:22:26.773604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:05.854 [2024-11-18 07:22:26.773616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:05.854 [2024-11-18 07:22:26.773626] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:05.854 [2024-11-18 07:22:26.775347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:05.854 [2024-11-18 07:22:26.775414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:05.854 [2024-11-18 07:22:26.775465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:05.854 [2024-11-18 07:22:26.775468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:06.113 [2024-11-18 07:22:26.871607] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:06.113 [2024-11-18 07:22:26.871849] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:06.113 [2024-11-18 07:22:26.872143] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:06.113 [2024-11-18 07:22:26.872802] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:06.113 [2024-11-18 07:22:26.873036] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:37:06.113 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:06.113 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:06.113 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:06.113 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:06.113 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:06.113 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:06.113 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:06.113 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.113 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:06.113 [2024-11-18 07:22:26.924212] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:06.113 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.113 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:37:06.113 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:06.113 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:06.113 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:06.113 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:37:06.113 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:37:06.113 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.113 07:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:06.113 Malloc0 00:37:06.113 [2024-11-18 07:22:26.996326] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:06.113 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.113 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:37:06.113 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:06.113 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:06.113 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=420268 00:37:06.113 07:22:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 420268 /var/tmp/bdevperf.sock 00:37:06.113 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 420268 ']' 00:37:06.113 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:06.113 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:37:06.113 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:06.113 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:06.113 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:06.113 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:06.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:06.113 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:06.113 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:06.113 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:06.113 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:06.113 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:06.113 { 00:37:06.113 "params": { 00:37:06.113 "name": "Nvme$subsystem", 00:37:06.113 "trtype": "$TEST_TRANSPORT", 00:37:06.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:06.113 "adrfam": "ipv4", 00:37:06.113 "trsvcid": "$NVMF_PORT", 00:37:06.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:06.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:06.113 "hdgst": ${hdgst:-false}, 00:37:06.113 "ddgst": ${ddgst:-false} 00:37:06.113 }, 00:37:06.113 "method": "bdev_nvme_attach_controller" 00:37:06.113 } 00:37:06.113 EOF 00:37:06.113 )") 00:37:06.113 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:06.113 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:37:06.113 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:06.113 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:06.113 "params": { 00:37:06.113 "name": "Nvme0", 00:37:06.113 "trtype": "tcp", 00:37:06.113 "traddr": "10.0.0.2", 00:37:06.113 "adrfam": "ipv4", 00:37:06.113 "trsvcid": "4420", 00:37:06.113 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:06.113 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:06.113 "hdgst": false, 00:37:06.113 "ddgst": false 00:37:06.113 }, 00:37:06.113 "method": "bdev_nvme_attach_controller" 00:37:06.113 }' 00:37:06.113 [2024-11-18 07:22:27.080172] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:37:06.113 [2024-11-18 07:22:27.080263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid420268 ] 00:37:06.373 [2024-11-18 07:22:27.151139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:06.373 [2024-11-18 07:22:27.198288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:06.632 Running I/O for 10 seconds... 00:37:06.632 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:06.632 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:06.632 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:06.632 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.632 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:06.632 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.632 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:06.632 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:37:06.632 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:06.632 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:37:06.632 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:37:06.632 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:37:06.632 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:37:06.632 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:06.632 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:06.632 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:06.632 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.632 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:06.632 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.632 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:37:06.632 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:37:06.632 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:37:06.891 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:37:06.891 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:06.891 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:06.891 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:06.891 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.891 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:06.891 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.151 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:37:07.151 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:37:07.151 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:37:07.151 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:37:07.151 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:37:07.151 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:07.151 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.151 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:07.151 [2024-11-18 07:22:27.888604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.888667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.888696] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.888713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.888730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.888745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.888760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.888775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.888799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.888814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.888829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.888863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.888879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.888894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.888909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.888923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.888938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.888953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.888968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.888983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.888997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.889012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.889027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.889057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.889072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.889085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.889100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.889114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.889128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.889142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.889157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.889171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.889186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.889201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.889216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.889230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.889248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.889262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.889278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.889292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.889308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.889322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.889336] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.889350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.889365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.889379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.889393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.889407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.151 [2024-11-18 07:22:27.889422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.151 [2024-11-18 07:22:27.889436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.889451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.889464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.889479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.889516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.889535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.889550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.889564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.889578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.889593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.889607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.889622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.889640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.889656] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.889670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.889685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.889699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.889714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.889729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.889744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.889757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.889773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.889790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.889806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.889819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.889834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.889854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.889869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.889898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.889914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.889927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.889942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.889956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.889970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.889983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.889998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.890012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.890030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.890044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.890059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.890072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.890086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.890100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.890114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.890128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.890142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.890156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.890171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.890184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.890199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.890212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.890227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.890240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.890262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.890277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.890292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.890305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.890319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.890334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.890348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.890363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.890377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.890394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.890410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.890424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.890438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.890452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.890481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.890506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.152 [2024-11-18 07:22:27.890522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.152 [2024-11-18 07:22:27.890544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.153 [2024-11-18 07:22:27.890559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.153 [2024-11-18 07:22:27.890573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.153 [2024-11-18 07:22:27.890588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.153 [2024-11-18 07:22:27.890602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.153 [2024-11-18 07:22:27.890617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.153 [2024-11-18 07:22:27.890631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.153 [2024-11-18 07:22:27.890646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.153 [2024-11-18 07:22:27.890660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.153 [2024-11-18 07:22:27.890822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:07.153 [2024-11-18 07:22:27.890844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.153 [2024-11-18 07:22:27.890859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:07.153 [2024-11-18 07:22:27.890886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.153 [2024-11-18 07:22:27.890901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:07.153 [2024-11-18 07:22:27.890915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.153 [2024-11-18 07:22:27.890931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:07.153 [2024-11-18 07:22:27.890946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.153 [2024-11-18 07:22:27.890964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ed70 is same with the state(6) to be set 00:37:07.153 [2024-11-18 07:22:27.892137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:07.153 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.153 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:07.153 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.153 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:07.153 task offset: 81920 on job bdev=Nvme0n1 fails 00:37:07.153 00:37:07.153 Latency(us) 00:37:07.153 [2024-11-18T06:22:28.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:07.153 Job: Nvme0n1 (Core 
Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:07.153 Job: Nvme0n1 ended in about 0.40 seconds with error 00:37:07.153 Verification LBA range: start 0x0 length 0x400 00:37:07.153 Nvme0n1 : 0.40 1607.38 100.46 160.74 0.00 35140.18 3179.71 34564.17 00:37:07.153 [2024-11-18T06:22:28.131Z] =================================================================================================================== 00:37:07.153 [2024-11-18T06:22:28.131Z] Total : 1607.38 100.46 160.74 0.00 35140.18 3179.71 34564.17 00:37:07.153 [2024-11-18 07:22:27.894012] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:07.153 [2024-11-18 07:22:27.894040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e1ed70 (9): Bad file descriptor 00:37:07.153 [2024-11-18 07:22:27.895213] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:37:07.153 [2024-11-18 07:22:27.895354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:37:07.153 [2024-11-18 07:22:27.895383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:07.153 [2024-11-18 07:22:27.895409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:37:07.153 [2024-11-18 07:22:27.895427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:37:07.153 [2024-11-18 07:22:27.895441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.153 [2024-11-18 07:22:27.895453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e1ed70 00:37:07.153 [2024-11-18 07:22:27.895486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e1ed70 (9): Bad file descriptor 00:37:07.153 [2024-11-18 07:22:27.895522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:37:07.153 [2024-11-18 07:22:27.895550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:37:07.153 [2024-11-18 07:22:27.895565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:37:07.153 [2024-11-18 07:22:27.895580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:37:07.153 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.153 07:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:37:08.089 07:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 420268 00:37:08.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (420268) - No such process 00:37:08.089 07:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:37:08.089 07:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:37:08.089 07:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:08.089 07:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:37:08.089 07:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:08.089 07:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:08.089 07:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:08.089 07:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:08.089 { 00:37:08.089 "params": { 00:37:08.089 "name": "Nvme$subsystem", 00:37:08.089 "trtype": "$TEST_TRANSPORT", 00:37:08.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:08.089 "adrfam": "ipv4", 00:37:08.089 "trsvcid": "$NVMF_PORT", 00:37:08.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:08.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:08.089 "hdgst": ${hdgst:-false}, 00:37:08.089 "ddgst": ${ddgst:-false} 00:37:08.089 }, 00:37:08.089 "method": "bdev_nvme_attach_controller" 00:37:08.089 } 00:37:08.089 EOF 00:37:08.089 )") 00:37:08.089 07:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:08.089 07:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:37:08.089 07:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:08.089 07:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:08.089 "params": { 00:37:08.089 "name": "Nvme0", 00:37:08.089 "trtype": "tcp", 00:37:08.089 "traddr": "10.0.0.2", 00:37:08.089 "adrfam": "ipv4", 00:37:08.089 "trsvcid": "4420", 00:37:08.089 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:08.089 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:08.089 "hdgst": false, 00:37:08.089 "ddgst": false 00:37:08.089 }, 00:37:08.089 "method": "bdev_nvme_attach_controller" 00:37:08.089 }' 00:37:08.089 [2024-11-18 07:22:28.949503] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:37:08.089 [2024-11-18 07:22:28.949629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid420544 ] 00:37:08.089 [2024-11-18 07:22:29.019886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:08.089 [2024-11-18 07:22:29.066441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:08.352 Running I/O for 1 seconds... 00:37:09.547 1664.00 IOPS, 104.00 MiB/s 00:37:09.547 Latency(us) 00:37:09.547 [2024-11-18T06:22:30.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:09.547 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:09.547 Verification LBA range: start 0x0 length 0x400 00:37:09.547 Nvme0n1 : 1.03 1681.58 105.10 0.00 0.00 37448.58 6893.42 33399.09 00:37:09.547 [2024-11-18T06:22:30.525Z] =================================================================================================================== 00:37:09.547 [2024-11-18T06:22:30.525Z] Total : 1681.58 105.10 0.00 0.00 37448.58 6893.42 33399.09 00:37:09.547 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:37:09.547 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:37:09.547 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:09.547 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:09.547 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:37:09.547 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:09.547 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:37:09.547 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:09.547 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:37:09.547 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:09.547 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:09.547 rmmod nvme_tcp 00:37:09.547 rmmod nvme_fabrics 00:37:09.547 rmmod nvme_keyring 00:37:09.547 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:09.547 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:37:09.547 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:37:09.547 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 420225 ']' 00:37:09.547 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 420225 00:37:09.547 07:22:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 420225 ']' 00:37:09.547 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 420225 00:37:09.547 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:37:09.547 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:09.547 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 420225 00:37:09.806 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:09.806 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:09.806 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 420225' 00:37:09.806 killing process with pid 420225 00:37:09.806 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 420225 00:37:09.806 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 420225 00:37:09.806 [2024-11-18 07:22:30.730914] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:09.806 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:09.806 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:09.806 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:09.806 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:37:09.806 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:37:09.806 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:09.806 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:37:09.806 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:09.806 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:09.806 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:09.806 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:09.806 07:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:12.346 00:37:12.346 real 0m8.634s 00:37:12.346 user 0m16.892s 
00:37:12.346 sys 0m3.665s 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:12.346 ************************************ 00:37:12.346 END TEST nvmf_host_management 00:37:12.346 ************************************ 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:12.346 ************************************ 00:37:12.346 START TEST nvmf_lvol 00:37:12.346 ************************************ 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:12.346 * Looking for test storage... 00:37:12.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:12.346 07:22:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:12.346 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:12.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.347 --rc genhtml_branch_coverage=1 00:37:12.347 --rc genhtml_function_coverage=1 00:37:12.347 --rc genhtml_legend=1 00:37:12.347 --rc geninfo_all_blocks=1 00:37:12.347 --rc geninfo_unexecuted_blocks=1 00:37:12.347 00:37:12.347 ' 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:12.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.347 --rc genhtml_branch_coverage=1 00:37:12.347 --rc genhtml_function_coverage=1 00:37:12.347 --rc genhtml_legend=1 00:37:12.347 --rc geninfo_all_blocks=1 00:37:12.347 --rc geninfo_unexecuted_blocks=1 00:37:12.347 00:37:12.347 ' 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:12.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.347 --rc genhtml_branch_coverage=1 00:37:12.347 --rc genhtml_function_coverage=1 00:37:12.347 --rc genhtml_legend=1 00:37:12.347 --rc geninfo_all_blocks=1 00:37:12.347 --rc geninfo_unexecuted_blocks=1 00:37:12.347 00:37:12.347 ' 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:12.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.347 --rc genhtml_branch_coverage=1 00:37:12.347 --rc genhtml_function_coverage=1 00:37:12.347 --rc 
genhtml_legend=1 00:37:12.347 --rc geninfo_all_blocks=1 00:37:12.347 --rc geninfo_unexecuted_blocks=1 00:37:12.347 00:37:12.347 ' 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:12.347 07:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:12.347 07:22:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:12.347 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:12.348 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:12.348 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:37:12.348 07:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:14.254 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:14.254 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:14.254 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:14.254 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:14.254 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:14.254 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:37:14.254 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:14.254 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:14.254 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:14.254 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:14.254 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:14.254 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:14.254 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:14.254 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:14.254 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:14.254 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:14.255 07:22:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:14.255 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:14.255 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:14.255 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:14.255 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:14.255 
07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:14.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:14.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:37:14.255 00:37:14.255 --- 10.0.0.2 ping statistics --- 00:37:14.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.255 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:14.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:14.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:37:14.255 00:37:14.255 --- 10.0.0.1 ping statistics --- 00:37:14.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.255 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:14.255 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:14.256 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:14.256 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:14.256 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:14.256 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=422622 00:37:14.256 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:14.256 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 422622 00:37:14.256 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 422622 ']' 00:37:14.256 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:14.256 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:14.256 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:14.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:14.256 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:14.256 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:14.256 [2024-11-18 07:22:35.224892] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
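For reference, the nvmf_tcp_init sequence traced above boils down to the shell steps below. This is a condensed sketch of this particular run only: the cvl_0_0/cvl_0_1 interface names come from the e810 NICs this job detected, and the cvl_0_0_ns_spdk namespace, the 10.0.0.1/10.0.0.2 addresses and the Jenkins workspace path are values chosen by the test harness, not fixed constants.

# move the target-side interface into its own namespace; the initiator side stays in the root namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# addressing: 10.0.0.1 on the initiator side, 10.0.0.2 on the target side inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open TCP/4420 for NVMe-oF traffic, tagged SPDK_NVMF so nvmftestfini can strip the rule again later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# sanity pings in both directions, then start the target inside the namespace in interrupt mode
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7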
00:37:14.256 [2024-11-18 07:22:35.225976] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:37:14.256 [2024-11-18 07:22:35.226028] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:14.512 [2024-11-18 07:22:35.298155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:14.512 [2024-11-18 07:22:35.343114] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:14.513 [2024-11-18 07:22:35.343170] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:14.513 [2024-11-18 07:22:35.343193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:14.513 [2024-11-18 07:22:35.343204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:14.513 [2024-11-18 07:22:35.343214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:14.513 [2024-11-18 07:22:35.344608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:14.513 [2024-11-18 07:22:35.344670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:14.513 [2024-11-18 07:22:35.344674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:14.513 [2024-11-18 07:22:35.426671] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:14.513 [2024-11-18 07:22:35.426847] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:14.513 [2024-11-18 07:22:35.426880] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:14.513 [2024-11-18 07:22:35.427122] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:37:14.513 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:14.513 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:37:14.513 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:14.513 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:14.513 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:14.513 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:14.513 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:14.771 [2024-11-18 07:22:35.725413] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:15.030 07:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:15.290 07:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:15.290 07:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:15.550 07:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:15.550 07:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:15.809 07:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:16.067 07:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a56852da-bba6-4a28-9fc8-086709b0c072 00:37:16.067 07:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a56852da-bba6-4a28-9fc8-086709b0c072 lvol 20 00:37:16.325 07:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=0a997a3f-ca33-48fd-b659-625c80ef1c04 00:37:16.325 07:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:16.583 07:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0a997a3f-ca33-48fd-b659-625c80ef1c04 00:37:16.841 07:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:17.100 [2024-11-18 07:22:37.981548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:37:17.100 07:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:17.358 07:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=423043 00:37:17.358 07:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:17.358 07:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:18.737 07:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 0a997a3f-ca33-48fd-b659-625c80ef1c04 MY_SNAPSHOT 00:37:18.737 07:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a440b4a6-9309-461c-8dcb-0740e9378664 00:37:18.737 07:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 0a997a3f-ca33-48fd-b659-625c80ef1c04 30 00:37:18.996 07:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a440b4a6-9309-461c-8dcb-0740e9378664 MY_CLONE 00:37:19.255 07:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3c206bc1-5e22-4458-9c9c-5696cb6609bd 00:37:19.255 07:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3c206bc1-5e22-4458-9c9c-5696cb6609bd 00:37:19.822 07:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 423043 00:37:27.936 Initializing NVMe Controllers 00:37:27.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:27.936 Controller IO queue size 128, less than required. 00:37:27.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:27.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:27.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:27.936 Initialization complete. Launching workers. 
00:37:27.936 ======================================================== 00:37:27.936 Latency(us) 00:37:27.936 Device Information : IOPS MiB/s Average min max 00:37:27.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10694.44 41.78 11971.97 4955.19 72908.76 00:37:27.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10598.04 41.40 12084.55 5058.06 76745.68 00:37:27.936 ======================================================== 00:37:27.936 Total : 21292.49 83.17 12028.00 4955.19 76745.68 00:37:27.936 00:37:27.936 07:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:28.194 07:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0a997a3f-ca33-48fd-b659-625c80ef1c04 00:37:28.452 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a56852da-bba6-4a28-9fc8-086709b0c072 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:28.710 rmmod nvme_tcp 00:37:28.710 rmmod nvme_fabrics 00:37:28.710 rmmod nvme_keyring 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 422622 ']' 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 422622 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 422622 ']' 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 422622 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 422622 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 422622' 00:37:28.710 killing process with pid 422622 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 422622 00:37:28.710 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 422622 00:37:28.970 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:28.970 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:28.970 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:28.970 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:37:28.970 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:37:28.970 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:28.970 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:37:28.970 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:28.970 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:28.970 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:28.970 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:28.970 07:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:31.511 07:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:31.511 00:37:31.511 real 0m19.046s 00:37:31.511 user 0m56.483s 00:37:31.511 sys 0m7.674s 00:37:31.511 07:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:31.511 07:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:31.511 ************************************ 00:37:31.511 END TEST nvmf_lvol 00:37:31.511 ************************************ 00:37:31.511 07:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:31.511 07:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:31.511 07:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:31.511 07:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:31.511 ************************************ 00:37:31.511 START TEST nvmf_lvs_grow 00:37:31.511 
************************************ 00:37:31.511 07:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:31.511 * Looking for test storage... 00:37:31.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:31.511 07:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:31.511 07:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:37:31.511 07:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:31.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.511 --rc genhtml_branch_coverage=1 00:37:31.511 --rc genhtml_function_coverage=1 00:37:31.511 --rc genhtml_legend=1 00:37:31.511 --rc geninfo_all_blocks=1 00:37:31.511 --rc geninfo_unexecuted_blocks=1 00:37:31.511 00:37:31.511 ' 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:31.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.511 --rc genhtml_branch_coverage=1 00:37:31.511 --rc genhtml_function_coverage=1 00:37:31.511 --rc genhtml_legend=1 00:37:31.511 --rc geninfo_all_blocks=1 00:37:31.511 --rc geninfo_unexecuted_blocks=1 00:37:31.511 00:37:31.511 ' 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:31.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.511 --rc genhtml_branch_coverage=1 00:37:31.511 --rc genhtml_function_coverage=1 00:37:31.511 --rc genhtml_legend=1 00:37:31.511 --rc geninfo_all_blocks=1 00:37:31.511 --rc geninfo_unexecuted_blocks=1 00:37:31.511 00:37:31.511 ' 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:31.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.511 --rc genhtml_branch_coverage=1 00:37:31.511 --rc genhtml_function_coverage=1 00:37:31.511 --rc genhtml_legend=1 00:37:31.511 --rc geninfo_all_blocks=1 00:37:31.511 --rc geninfo_unexecuted_blocks=1 00:37:31.511 00:37:31.511 ' 00:37:31.511 07:22:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:31.511 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
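The trace here is nvmf/common.sh assembling the nvmf_tgt argument array from its defaults. A condensed, simplified reading of that assembly (not the full build_nvmf_app_args helper) is sketched below; $SPDK stands in for the spdk checkout path shown in the log and $NVMF_APP_SHM_ID is the shm id (0 in this run) — both shorthands are assumptions for readability.

    # Sketch only: how the argument list traced above is put together.
    NVMF_APP=("$SPDK/build/bin/nvmf_tgt")
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id + 0xFFFF tracepoint mask
    NVMF_APP+=(--interrupt-mode)                  # this suite runs the target in interrupt mode (appended just below)
    # Launched later, inside the target's network namespace, as:
    #   ip netns exec cvl_0_0_ns_spdk "${NVMF_APP[@]}" -m 0x1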
00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:37:31.512 07:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:33.418 07:22:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
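The device scan above matches Intel E810/X722 and Mellanox PCI IDs and then resolves each matching function to its kernel net interface through sysfs. A minimal standalone sketch of that last step follows; the PCI address is just the one this run reports further down and is used here as an example value.

    # Sketch: list net interfaces behind one PCI function, the same
    # /sys/bus/pci/devices/<addr>/net/* layout that pci_net_devs expands.
    pci=0000:0a:00.0
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] && echo "net device under $pci: $(basename "$dev")"
    done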
00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:33.418 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:33.418 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:33.419 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:33.419 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:33.419 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:33.419 07:22:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:33.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:33.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:37:33.419 00:37:33.419 --- 10.0.0.2 ping statistics --- 00:37:33.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:33.419 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:33.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:33.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:37:33.419 00:37:33.419 --- 10.0.0.1 ping statistics --- 00:37:33.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:33.419 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=426298 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 426298 00:37:33.419 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 426298 ']' 00:37:33.420 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:33.420 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:33.420 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:33.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:33.420 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:33.420 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:33.420 [2024-11-18 07:22:54.360444] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
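The block above is nvmf_tcp_init building a two-port loopback: one port is moved into a private network namespace to act as the target (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), and a ping in each direction confirms the path. A condensed replay of those commands (run as root) is below; the cvl_0_0/cvl_0_1 names are the interfaces this particular run detected, so substitute your own.

    ip netns add cvl_0_0_ns_spdk                       # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator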
00:37:33.420 [2024-11-18 07:22:54.361538] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:37:33.420 [2024-11-18 07:22:54.361593] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:33.678 [2024-11-18 07:22:54.434156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:33.678 [2024-11-18 07:22:54.477746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:33.678 [2024-11-18 07:22:54.477815] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:33.678 [2024-11-18 07:22:54.477843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:33.678 [2024-11-18 07:22:54.477855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:33.678 [2024-11-18 07:22:54.477865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:33.678 [2024-11-18 07:22:54.478426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:33.678 [2024-11-18 07:22:54.560160] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:33.678 [2024-11-18 07:22:54.560466] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:33.678 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:33.678 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:37:33.678 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:33.678 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:33.678 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:33.678 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:33.678 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:33.937 [2024-11-18 07:22:54.854995] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:33.937 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:37:33.937 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:33.937 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:33.937 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:33.937 ************************************ 00:37:33.937 START TEST lvs_grow_clean 00:37:33.937 ************************************ 00:37:33.937 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:37:33.937 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:33.937 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:33.937 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:33.937 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:33.937 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:33.937 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:33.937 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:33.937 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:33.937 07:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:34.508 07:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:34.508 07:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:34.768 07:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a4e39cb6-7c66-4faf-a911-d049b14c93d3 00:37:34.768 07:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4e39cb6-7c66-4faf-a911-d049b14c93d3 00:37:34.768 07:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:35.029 07:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:35.029 07:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:35.029 07:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a4e39cb6-7c66-4faf-a911-d049b14c93d3 lvol 150 00:37:35.287 07:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=fe61e4e8-6948-472b-9a05-03df655826fa 00:37:35.287 07:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:35.287 07:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:35.545 [2024-11-18 07:22:56.306919] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:35.545 [2024-11-18 07:22:56.307023] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:35.545 true 00:37:35.545 07:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:35.545 07:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4e39cb6-7c66-4faf-a911-d049b14c93d3 00:37:35.805 07:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:35.805 07:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:36.066 07:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fe61e4e8-6948-472b-9a05-03df655826fa 00:37:36.323 07:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:36.583 [2024-11-18 07:22:57.399205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:36.583 07:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:36.844 07:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=426728 00:37:36.844 07:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:36.844 07:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:36.844 07:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 426728 /var/tmp/bdevperf.sock 00:37:36.844 07:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 426728 ']' 00:37:36.844 07:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:37:36.844 07:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:36.844 07:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:36.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:36.844 07:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:36.844 07:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:36.844 [2024-11-18 07:22:57.740893] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:37:36.844 [2024-11-18 07:22:57.740979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid426728 ] 00:37:36.844 [2024-11-18 07:22:57.812434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:37.104 [2024-11-18 07:22:57.863296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:37.104 07:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:37.104 07:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:37:37.104 07:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:37.364 Nvme0n1 00:37:37.364 07:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:37.622 [ 00:37:37.622 { 00:37:37.622 "name": "Nvme0n1", 00:37:37.622 "aliases": [ 00:37:37.622 "fe61e4e8-6948-472b-9a05-03df655826fa" 00:37:37.622 ], 00:37:37.622 "product_name": "NVMe disk", 00:37:37.622 "block_size": 4096, 00:37:37.622 "num_blocks": 38912, 00:37:37.622 "uuid": "fe61e4e8-6948-472b-9a05-03df655826fa", 00:37:37.622 "numa_id": 0, 00:37:37.623 "assigned_rate_limits": { 00:37:37.623 "rw_ios_per_sec": 0, 00:37:37.623 "rw_mbytes_per_sec": 0, 00:37:37.623 "r_mbytes_per_sec": 0, 00:37:37.623 "w_mbytes_per_sec": 0 00:37:37.623 }, 00:37:37.623 "claimed": false, 00:37:37.623 "zoned": false, 00:37:37.623 "supported_io_types": { 00:37:37.623 "read": true, 00:37:37.623 "write": true, 00:37:37.623 "unmap": true, 00:37:37.623 "flush": true, 00:37:37.623 "reset": true, 00:37:37.623 "nvme_admin": true, 00:37:37.623 "nvme_io": true, 00:37:37.623 "nvme_io_md": false, 00:37:37.623 "write_zeroes": true, 00:37:37.623 "zcopy": false, 00:37:37.623 "get_zone_info": false, 00:37:37.623 "zone_management": false, 00:37:37.623 "zone_append": false, 00:37:37.623 "compare": true, 00:37:37.623 "compare_and_write": true, 00:37:37.623 "abort": true, 00:37:37.623 "seek_hole": false, 00:37:37.623 "seek_data": false, 00:37:37.623 "copy": true, 
00:37:37.623 "nvme_iov_md": false 00:37:37.623 }, 00:37:37.623 "memory_domains": [ 00:37:37.623 { 00:37:37.623 "dma_device_id": "system", 00:37:37.623 "dma_device_type": 1 00:37:37.623 } 00:37:37.623 ], 00:37:37.623 "driver_specific": { 00:37:37.623 "nvme": [ 00:37:37.623 { 00:37:37.623 "trid": { 00:37:37.623 "trtype": "TCP", 00:37:37.623 "adrfam": "IPv4", 00:37:37.623 "traddr": "10.0.0.2", 00:37:37.623 "trsvcid": "4420", 00:37:37.623 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:37.623 }, 00:37:37.623 "ctrlr_data": { 00:37:37.623 "cntlid": 1, 00:37:37.623 "vendor_id": "0x8086", 00:37:37.623 "model_number": "SPDK bdev Controller", 00:37:37.623 "serial_number": "SPDK0", 00:37:37.623 "firmware_revision": "25.01", 00:37:37.623 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:37.623 "oacs": { 00:37:37.623 "security": 0, 00:37:37.623 "format": 0, 00:37:37.623 "firmware": 0, 00:37:37.623 "ns_manage": 0 00:37:37.623 }, 00:37:37.623 "multi_ctrlr": true, 00:37:37.623 "ana_reporting": false 00:37:37.623 }, 00:37:37.623 "vs": { 00:37:37.623 "nvme_version": "1.3" 00:37:37.623 }, 00:37:37.623 "ns_data": { 00:37:37.623 "id": 1, 00:37:37.623 "can_share": true 00:37:37.623 } 00:37:37.623 } 00:37:37.623 ], 00:37:37.623 "mp_policy": "active_passive" 00:37:37.623 } 00:37:37.623 } 00:37:37.623 ] 00:37:37.623 07:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=426863 00:37:37.623 07:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:37.623 07:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:37.881 Running I/O for 10 seconds... 
00:37:38.830 Latency(us) 00:37:38.830 [2024-11-18T06:22:59.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:38.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:38.830 Nvme0n1 : 1.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:37:38.830 [2024-11-18T06:22:59.808Z] =================================================================================================================== 00:37:38.830 [2024-11-18T06:22:59.808Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:37:38.830 00:37:39.767 07:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a4e39cb6-7c66-4faf-a911-d049b14c93d3 00:37:39.767 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:39.767 Nvme0n1 : 2.00 15058.00 58.82 0.00 0.00 0.00 0.00 0.00 00:37:39.767 [2024-11-18T06:23:00.745Z] =================================================================================================================== 00:37:39.767 [2024-11-18T06:23:00.745Z] Total : 15058.00 58.82 0.00 0.00 0.00 0.00 0.00 00:37:39.767 00:37:40.026 true 00:37:40.026 07:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4e39cb6-7c66-4faf-a911-d049b14c93d3 00:37:40.026 07:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:40.284 07:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:40.284 07:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:40.284 07:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 426863 00:37:40.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:40.853 Nvme0n1 : 3.00 15161.00 59.22 0.00 0.00 0.00 0.00 0.00 00:37:40.853 [2024-11-18T06:23:01.831Z] =================================================================================================================== 00:37:40.853 [2024-11-18T06:23:01.831Z] Total : 15161.00 59.22 0.00 0.00 0.00 0.00 0.00 00:37:40.853 00:37:41.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:41.795 Nvme0n1 : 4.00 15244.25 59.55 0.00 0.00 0.00 0.00 0.00 00:37:41.795 [2024-11-18T06:23:02.773Z] =================================================================================================================== 00:37:41.795 [2024-11-18T06:23:02.773Z] Total : 15244.25 59.55 0.00 0.00 0.00 0.00 0.00 00:37:41.795 00:37:43.175 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:43.175 Nvme0n1 : 5.00 15218.00 59.45 0.00 0.00 0.00 0.00 0.00 00:37:43.175 [2024-11-18T06:23:04.153Z] =================================================================================================================== 00:37:43.175 [2024-11-18T06:23:04.153Z] Total : 15218.00 59.45 0.00 0.00 0.00 0.00 0.00 00:37:43.175 00:37:43.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:43.745 Nvme0n1 : 6.00 15285.17 59.71 0.00 0.00 0.00 0.00 0.00 00:37:43.745 [2024-11-18T06:23:04.723Z] 
=================================================================================================================== 00:37:43.745 [2024-11-18T06:23:04.723Z] Total : 15285.17 59.71 0.00 0.00 0.00 0.00 0.00 00:37:43.745 00:37:45.124 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:45.124 Nvme0n1 : 7.00 15342.29 59.93 0.00 0.00 0.00 0.00 0.00 00:37:45.124 [2024-11-18T06:23:06.102Z] =================================================================================================================== 00:37:45.124 [2024-11-18T06:23:06.102Z] Total : 15342.29 59.93 0.00 0.00 0.00 0.00 0.00 00:37:45.124 00:37:46.063 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:46.064 Nvme0n1 : 8.00 15389.25 60.11 0.00 0.00 0.00 0.00 0.00 00:37:46.064 [2024-11-18T06:23:07.042Z] =================================================================================================================== 00:37:46.064 [2024-11-18T06:23:07.042Z] Total : 15389.25 60.11 0.00 0.00 0.00 0.00 0.00 00:37:46.064 00:37:46.999 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:46.999 Nvme0n1 : 9.00 15436.22 60.30 0.00 0.00 0.00 0.00 0.00 00:37:46.999 [2024-11-18T06:23:07.977Z] =================================================================================================================== 00:37:46.999 [2024-11-18T06:23:07.977Z] Total : 15436.22 60.30 0.00 0.00 0.00 0.00 0.00 00:37:46.999 00:37:47.937 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:47.937 Nvme0n1 : 10.00 15473.70 60.44 0.00 0.00 0.00 0.00 0.00 00:37:47.937 [2024-11-18T06:23:08.915Z] =================================================================================================================== 00:37:47.937 [2024-11-18T06:23:08.915Z] Total : 15473.70 60.44 0.00 0.00 0.00 0.00 0.00 00:37:47.937 00:37:47.937 00:37:47.937 Latency(us) 00:37:47.937 [2024-11-18T06:23:08.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:47.937 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:47.937 Nvme0n1 : 10.00 15479.30 60.47 0.00 0.00 8264.61 4296.25 17767.54 00:37:47.937 [2024-11-18T06:23:08.915Z] =================================================================================================================== 00:37:47.937 [2024-11-18T06:23:08.915Z] Total : 15479.30 60.47 0.00 0.00 8264.61 4296.25 17767.54 00:37:47.937 { 00:37:47.937 "results": [ 00:37:47.937 { 00:37:47.937 "job": "Nvme0n1", 00:37:47.937 "core_mask": "0x2", 00:37:47.937 "workload": "randwrite", 00:37:47.937 "status": "finished", 00:37:47.937 "queue_depth": 128, 00:37:47.937 "io_size": 4096, 00:37:47.937 "runtime": 10.004652, 00:37:47.937 "iops": 15479.299030091202, 00:37:47.937 "mibps": 60.46601183629376, 00:37:47.937 "io_failed": 0, 00:37:47.937 "io_timeout": 0, 00:37:47.937 "avg_latency_us": 8264.61041885226, 00:37:47.937 "min_latency_us": 4296.248888888889, 00:37:47.937 "max_latency_us": 17767.53777777778 00:37:47.937 } 00:37:47.937 ], 00:37:47.937 "core_count": 1 00:37:47.937 } 00:37:47.937 07:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 426728 00:37:47.937 07:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 426728 ']' 00:37:47.937 07:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 426728 
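While bdevperf keeps random writes running, the test grows the logical volume store in place: the AIO backing file is enlarged from 200M to 400M, the aio bdev is rescanned, and bdev_lvol_grow_lvstore claims the new space, taking total_data_clusters from 49 to 99. A condensed sketch of that grow-and-verify sequence, using the paths and lvstore UUID from this run ($SPDK and $RPC as in the sketches above):

    AIO=$SPDK/test/nvmf/target/aio_bdev
    truncate -s 400M "$AIO"                         # enlarge the backing file
    $RPC bdev_aio_rescan aio_bdev                   # let SPDK pick up the new size
    $RPC bdev_lvol_grow_lvstore -u a4e39cb6-7c66-4faf-a911-d049b14c93d3
    clusters=$($RPC bdev_lvol_get_lvstores -u a4e39cb6-7c66-4faf-a911-d049b14c93d3 \
        | jq -r '.[0].total_data_clusters')
    (( clusters == 99 ))                            # 49 clusters before the grow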
00:37:47.937 07:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:37:47.937 07:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:47.937 07:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 426728 00:37:47.937 07:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:47.937 07:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:47.937 07:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 426728' 00:37:47.937 killing process with pid 426728 00:37:47.937 07:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 426728 00:37:47.937 Received shutdown signal, test time was about 10.000000 seconds 00:37:47.937 00:37:47.937 Latency(us) 00:37:47.937 [2024-11-18T06:23:08.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:47.937 [2024-11-18T06:23:08.915Z] =================================================================================================================== 00:37:47.937 [2024-11-18T06:23:08.915Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:47.937 07:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 426728 00:37:48.197 07:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:48.456 07:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:48.715 07:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4e39cb6-7c66-4faf-a911-d049b14c93d3 00:37:48.715 07:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:37:48.974 07:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:37:48.974 07:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:37:48.974 07:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:49.234 [2024-11-18 07:23:10.082944] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:49.234 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4e39cb6-7c66-4faf-a911-d049b14c93d3 
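Teardown then checks that the lvstore metadata really lived on the aio bdev: once bdev_aio_delete removes the backing bdev, the lvstore disappears with it, so the final bdev_lvol_get_lvstores call is wrapped in NOT and is expected to fail with the -19 "No such device" JSON-RPC error shown just below. A minimal sketch of that expected-failure check, with the same $RPC shorthand as above:

    # The lvstore must be gone once aio_bdev has been deleted.
    if $RPC bdev_lvol_get_lvstores -u a4e39cb6-7c66-4faf-a911-d049b14c93d3; then
        echo "lvstore unexpectedly still present" >&2
        exit 1
    fi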
00:37:49.234 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:37:49.234 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4e39cb6-7c66-4faf-a911-d049b14c93d3 00:37:49.234 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:49.234 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:49.234 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:49.234 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:49.234 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:49.234 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:49.234 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:49.234 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:49.234 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4e39cb6-7c66-4faf-a911-d049b14c93d3 00:37:49.494 request: 00:37:49.495 { 00:37:49.495 "uuid": "a4e39cb6-7c66-4faf-a911-d049b14c93d3", 00:37:49.495 "method": "bdev_lvol_get_lvstores", 00:37:49.495 "req_id": 1 00:37:49.495 } 00:37:49.495 Got JSON-RPC error response 00:37:49.495 response: 00:37:49.495 { 00:37:49.495 "code": -19, 00:37:49.495 "message": "No such device" 00:37:49.495 } 00:37:49.495 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:37:49.495 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:49.495 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:49.495 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:49.495 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:49.758 aio_bdev 00:37:49.758 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
fe61e4e8-6948-472b-9a05-03df655826fa 00:37:49.758 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=fe61e4e8-6948-472b-9a05-03df655826fa 00:37:49.758 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:49.758 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:37:49.758 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:49.758 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:49.758 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:50.031 07:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fe61e4e8-6948-472b-9a05-03df655826fa -t 2000 00:37:50.309 [ 00:37:50.309 { 00:37:50.309 "name": "fe61e4e8-6948-472b-9a05-03df655826fa", 00:37:50.309 "aliases": [ 00:37:50.309 "lvs/lvol" 00:37:50.309 ], 00:37:50.309 "product_name": "Logical Volume", 00:37:50.309 "block_size": 4096, 00:37:50.309 "num_blocks": 38912, 00:37:50.309 "uuid": "fe61e4e8-6948-472b-9a05-03df655826fa", 00:37:50.309 "assigned_rate_limits": { 00:37:50.309 "rw_ios_per_sec": 0, 00:37:50.309 "rw_mbytes_per_sec": 0, 00:37:50.309 "r_mbytes_per_sec": 0, 00:37:50.309 "w_mbytes_per_sec": 0 00:37:50.309 }, 00:37:50.309 "claimed": false, 00:37:50.309 "zoned": false, 00:37:50.309 "supported_io_types": { 00:37:50.309 "read": true, 00:37:50.309 "write": true, 00:37:50.309 "unmap": true, 00:37:50.309 "flush": false, 00:37:50.309 "reset": true, 00:37:50.309 "nvme_admin": false, 00:37:50.309 "nvme_io": false, 00:37:50.309 "nvme_io_md": false, 00:37:50.309 "write_zeroes": true, 00:37:50.309 "zcopy": false, 00:37:50.309 "get_zone_info": false, 00:37:50.309 "zone_management": false, 00:37:50.309 "zone_append": false, 00:37:50.309 "compare": false, 00:37:50.309 "compare_and_write": false, 00:37:50.309 "abort": false, 00:37:50.309 "seek_hole": true, 00:37:50.309 "seek_data": true, 00:37:50.309 "copy": false, 00:37:50.309 "nvme_iov_md": false 00:37:50.309 }, 00:37:50.309 "driver_specific": { 00:37:50.309 "lvol": { 00:37:50.309 "lvol_store_uuid": "a4e39cb6-7c66-4faf-a911-d049b14c93d3", 00:37:50.309 "base_bdev": "aio_bdev", 00:37:50.309 "thin_provision": false, 00:37:50.309 "num_allocated_clusters": 38, 00:37:50.309 "snapshot": false, 00:37:50.309 "clone": false, 00:37:50.309 "esnap_clone": false 00:37:50.309 } 00:37:50.309 } 00:37:50.309 } 00:37:50.309 ] 00:37:50.309 07:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:37:50.309 07:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4e39cb6-7c66-4faf-a911-d049b14c93d3 00:37:50.309 07:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:37:50.591 07:23:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:37:50.591 07:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4e39cb6-7c66-4faf-a911-d049b14c93d3 00:37:50.591 07:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:37:50.853 07:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:37:50.853 07:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fe61e4e8-6948-472b-9a05-03df655826fa 00:37:51.110 07:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a4e39cb6-7c66-4faf-a911-d049b14c93d3 00:37:51.676 07:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:51.934 07:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:51.934 00:37:51.934 real 0m17.774s 00:37:51.934 user 0m17.332s 00:37:51.934 sys 0m1.851s 00:37:51.934 07:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:51.934 07:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:51.934 ************************************ 00:37:51.934 END TEST lvs_grow_clean 00:37:51.934 ************************************ 00:37:51.934 07:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:37:51.934 07:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:51.934 07:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:51.934 07:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:51.934 ************************************ 00:37:51.934 START TEST lvs_grow_dirty 00:37:51.934 ************************************ 00:37:51.934 07:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:37:51.934 07:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:51.934 07:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:51.934 07:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:51.934 07:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:51.934 07:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:51.934 07:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:51.934 07:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:51.934 07:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:51.934 07:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:52.194 07:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:52.194 07:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:52.452 07:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f0228a0e-efc2-4f04-bddc-8a557119013e 00:37:52.452 07:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0228a0e-efc2-4f04-bddc-8a557119013e 00:37:52.452 07:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:52.712 07:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:52.712 07:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:52.712 07:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f0228a0e-efc2-4f04-bddc-8a557119013e lvol 150 00:37:52.973 07:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b52c5ff4-ea28-436a-a868-7072a13672e8 00:37:52.973 07:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:52.973 07:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:53.232 [2024-11-18 07:23:14.110893] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:53.232 [2024-11-18 07:23:14.111027] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:53.232 true 00:37:53.232 07:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0228a0e-efc2-4f04-bddc-8a557119013e 00:37:53.232 07:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:53.491 07:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:53.491 07:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:53.751 07:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b52c5ff4-ea28-436a-a868-7072a13672e8 00:37:54.012 07:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:54.272 [2024-11-18 07:23:15.227177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:54.272 07:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:54.843 07:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=428773 00:37:54.843 07:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:54.843 07:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:54.843 07:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 428773 /var/tmp/bdevperf.sock 00:37:54.843 07:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 428773 ']' 00:37:54.843 07:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:54.843 07:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:54.843 07:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:54.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
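The trace above sets up the lvs_grow_dirty fixture: a 200 MiB file-backed AIO bdev carrying an lvstore with 4 MiB clusters, a 150 MiB lvol inside it, the lvol exported over NVMe/TCP on 10.0.0.2:4420, and a bdevperf instance started against /var/tmp/bdevperf.sock. A minimal sketch of that RPC sequence, distilled from the trace rather than taken from the test script, with the long Jenkins workspace paths shortened to ./scripts/rpc.py and the UUIDs treated as placeholders:

    # Backing file and AIO bdev (4 KiB block size), then an lvstore with 4 MiB clusters
    truncate -s 200M ./test/nvmf/target/aio_bdev
    ./scripts/rpc.py bdev_aio_create ./test/nvmf/target/aio_bdev aio_bdev 4096
    lvs=$(./scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)        # prints the lvstore UUID
    lvol=$(./scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)    # 150 MiB logical volume

    # Export the lvol over NVMe/TCP and start bdevperf, which waits for an RPC to begin I/O (-z)
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

The lines that follow show bdevperf attaching the namespace as Nvme0n1 via bdev_nvme_attach_controller on /var/tmp/bdevperf.sock and bdevperf.py perform_tests driving the 10-second randwrite run, while the store is grown in place underneath it: truncate -s 400M on the backing file, bdev_aio_rescan aio_bdev, then bdev_lvol_grow_lvstore, which is what moves total_data_clusters from 49 to 99 in the checks further down.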
00:37:54.843 07:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:54.843 07:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:54.843 [2024-11-18 07:23:15.562323] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:37:54.843 [2024-11-18 07:23:15.562422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428773 ] 00:37:54.843 [2024-11-18 07:23:15.630081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:54.843 [2024-11-18 07:23:15.678212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:54.843 07:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:54.843 07:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:37:54.843 07:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:55.413 Nvme0n1 00:37:55.413 07:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:55.671 [ 00:37:55.671 { 00:37:55.671 "name": "Nvme0n1", 00:37:55.671 "aliases": [ 00:37:55.671 "b52c5ff4-ea28-436a-a868-7072a13672e8" 00:37:55.671 ], 00:37:55.671 "product_name": "NVMe disk", 00:37:55.671 "block_size": 4096, 00:37:55.671 "num_blocks": 38912, 00:37:55.671 "uuid": "b52c5ff4-ea28-436a-a868-7072a13672e8", 00:37:55.671 "numa_id": 0, 00:37:55.671 "assigned_rate_limits": { 00:37:55.671 "rw_ios_per_sec": 0, 00:37:55.671 "rw_mbytes_per_sec": 0, 00:37:55.671 "r_mbytes_per_sec": 0, 00:37:55.671 "w_mbytes_per_sec": 0 00:37:55.671 }, 00:37:55.671 "claimed": false, 00:37:55.671 "zoned": false, 00:37:55.671 "supported_io_types": { 00:37:55.671 "read": true, 00:37:55.671 "write": true, 00:37:55.671 "unmap": true, 00:37:55.671 "flush": true, 00:37:55.671 "reset": true, 00:37:55.671 "nvme_admin": true, 00:37:55.671 "nvme_io": true, 00:37:55.671 "nvme_io_md": false, 00:37:55.671 "write_zeroes": true, 00:37:55.671 "zcopy": false, 00:37:55.671 "get_zone_info": false, 00:37:55.671 "zone_management": false, 00:37:55.671 "zone_append": false, 00:37:55.671 "compare": true, 00:37:55.671 "compare_and_write": true, 00:37:55.671 "abort": true, 00:37:55.671 "seek_hole": false, 00:37:55.671 "seek_data": false, 00:37:55.671 "copy": true, 00:37:55.671 "nvme_iov_md": false 00:37:55.671 }, 00:37:55.671 "memory_domains": [ 00:37:55.671 { 00:37:55.671 "dma_device_id": "system", 00:37:55.671 "dma_device_type": 1 00:37:55.671 } 00:37:55.671 ], 00:37:55.671 "driver_specific": { 00:37:55.671 "nvme": [ 00:37:55.671 { 00:37:55.671 "trid": { 00:37:55.671 "trtype": "TCP", 00:37:55.671 "adrfam": "IPv4", 00:37:55.671 "traddr": "10.0.0.2", 00:37:55.671 "trsvcid": "4420", 00:37:55.671 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:55.671 }, 00:37:55.671 "ctrlr_data": { 
00:37:55.671 "cntlid": 1, 00:37:55.671 "vendor_id": "0x8086", 00:37:55.671 "model_number": "SPDK bdev Controller", 00:37:55.671 "serial_number": "SPDK0", 00:37:55.671 "firmware_revision": "25.01", 00:37:55.671 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:55.671 "oacs": { 00:37:55.671 "security": 0, 00:37:55.671 "format": 0, 00:37:55.671 "firmware": 0, 00:37:55.671 "ns_manage": 0 00:37:55.671 }, 00:37:55.671 "multi_ctrlr": true, 00:37:55.671 "ana_reporting": false 00:37:55.671 }, 00:37:55.671 "vs": { 00:37:55.671 "nvme_version": "1.3" 00:37:55.671 }, 00:37:55.671 "ns_data": { 00:37:55.671 "id": 1, 00:37:55.671 "can_share": true 00:37:55.671 } 00:37:55.671 } 00:37:55.671 ], 00:37:55.671 "mp_policy": "active_passive" 00:37:55.671 } 00:37:55.671 } 00:37:55.671 ] 00:37:55.671 07:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=428908 00:37:55.671 07:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:55.671 07:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:55.930 Running I/O for 10 seconds... 00:37:56.866 Latency(us) 00:37:56.866 [2024-11-18T06:23:17.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:56.866 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:56.866 Nvme0n1 : 1.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:37:56.866 [2024-11-18T06:23:17.844Z] =================================================================================================================== 00:37:56.866 [2024-11-18T06:23:17.844Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:37:56.866 00:37:57.806 07:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f0228a0e-efc2-4f04-bddc-8a557119013e 00:37:57.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:57.806 Nvme0n1 : 2.00 15272.00 59.66 0.00 0.00 0.00 0.00 0.00 00:37:57.806 [2024-11-18T06:23:18.784Z] =================================================================================================================== 00:37:57.806 [2024-11-18T06:23:18.784Z] Total : 15272.00 59.66 0.00 0.00 0.00 0.00 0.00 00:37:57.806 00:37:58.064 true 00:37:58.064 07:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0228a0e-efc2-4f04-bddc-8a557119013e 00:37:58.064 07:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:58.332 07:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:58.332 07:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:58.332 07:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 428908 00:37:58.903 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:58.903 Nvme0n1 : 3.00 
15367.00 60.03 0.00 0.00 0.00 0.00 0.00 00:37:58.903 [2024-11-18T06:23:19.881Z] =================================================================================================================== 00:37:58.903 [2024-11-18T06:23:19.881Z] Total : 15367.00 60.03 0.00 0.00 0.00 0.00 0.00 00:37:58.903 00:37:59.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:59.842 Nvme0n1 : 4.00 15335.25 59.90 0.00 0.00 0.00 0.00 0.00 00:37:59.842 [2024-11-18T06:23:20.820Z] =================================================================================================================== 00:37:59.842 [2024-11-18T06:23:20.820Z] Total : 15335.25 59.90 0.00 0.00 0.00 0.00 0.00 00:37:59.842 00:38:00.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:00.777 Nvme0n1 : 5.00 15392.40 60.13 0.00 0.00 0.00 0.00 0.00 00:38:00.777 [2024-11-18T06:23:21.755Z] =================================================================================================================== 00:38:00.777 [2024-11-18T06:23:21.755Z] Total : 15392.40 60.13 0.00 0.00 0.00 0.00 0.00 00:38:00.777 00:38:02.157 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:02.157 Nvme0n1 : 6.00 15472.83 60.44 0.00 0.00 0.00 0.00 0.00 00:38:02.157 [2024-11-18T06:23:23.135Z] =================================================================================================================== 00:38:02.157 [2024-11-18T06:23:23.135Z] Total : 15472.83 60.44 0.00 0.00 0.00 0.00 0.00 00:38:02.157 00:38:03.095 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:03.096 Nvme0n1 : 7.00 15512.14 60.59 0.00 0.00 0.00 0.00 0.00 00:38:03.096 [2024-11-18T06:23:24.074Z] =================================================================================================================== 00:38:03.096 [2024-11-18T06:23:24.074Z] Total : 15512.14 60.59 0.00 0.00 0.00 0.00 0.00 00:38:03.096 00:38:04.032 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:04.032 Nvme0n1 : 8.00 15525.75 60.65 0.00 0.00 0.00 0.00 0.00 00:38:04.032 [2024-11-18T06:23:25.010Z] =================================================================================================================== 00:38:04.032 [2024-11-18T06:23:25.010Z] Total : 15525.75 60.65 0.00 0.00 0.00 0.00 0.00 00:38:04.032 00:38:04.969 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:04.969 Nvme0n1 : 9.00 15564.56 60.80 0.00 0.00 0.00 0.00 0.00 00:38:04.969 [2024-11-18T06:23:25.947Z] =================================================================================================================== 00:38:04.969 [2024-11-18T06:23:25.947Z] Total : 15564.56 60.80 0.00 0.00 0.00 0.00 0.00 00:38:04.969 00:38:05.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:05.907 Nvme0n1 : 10.00 15608.30 60.97 0.00 0.00 0.00 0.00 0.00 00:38:05.907 [2024-11-18T06:23:26.885Z] =================================================================================================================== 00:38:05.907 [2024-11-18T06:23:26.885Z] Total : 15608.30 60.97 0.00 0.00 0.00 0.00 0.00 00:38:05.907 00:38:05.907 00:38:05.907 Latency(us) 00:38:05.907 [2024-11-18T06:23:26.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:05.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:05.907 Nvme0n1 : 10.01 15610.94 60.98 0.00 0.00 8194.63 6310.87 18738.44 00:38:05.907 
[2024-11-18T06:23:26.885Z] =================================================================================================================== 00:38:05.907 [2024-11-18T06:23:26.885Z] Total : 15610.94 60.98 0.00 0.00 8194.63 6310.87 18738.44 00:38:05.907 { 00:38:05.907 "results": [ 00:38:05.907 { 00:38:05.907 "job": "Nvme0n1", 00:38:05.907 "core_mask": "0x2", 00:38:05.907 "workload": "randwrite", 00:38:05.907 "status": "finished", 00:38:05.907 "queue_depth": 128, 00:38:05.907 "io_size": 4096, 00:38:05.907 "runtime": 10.006507, 00:38:05.907 "iops": 15610.941960066584, 00:38:05.907 "mibps": 60.980242031510095, 00:38:05.907 "io_failed": 0, 00:38:05.907 "io_timeout": 0, 00:38:05.907 "avg_latency_us": 8194.631184402293, 00:38:05.907 "min_latency_us": 6310.874074074074, 00:38:05.907 "max_latency_us": 18738.44148148148 00:38:05.907 } 00:38:05.907 ], 00:38:05.907 "core_count": 1 00:38:05.907 } 00:38:05.907 07:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 428773 00:38:05.907 07:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 428773 ']' 00:38:05.907 07:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 428773 00:38:05.907 07:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:38:05.907 07:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:05.907 07:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 428773 00:38:05.907 07:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:05.907 07:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:05.907 07:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 428773' 00:38:05.907 killing process with pid 428773 00:38:05.907 07:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 428773 00:38:05.907 Received shutdown signal, test time was about 10.000000 seconds 00:38:05.907 00:38:05.907 Latency(us) 00:38:05.907 [2024-11-18T06:23:26.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:05.907 [2024-11-18T06:23:26.885Z] =================================================================================================================== 00:38:05.907 [2024-11-18T06:23:26.885Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:05.907 07:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 428773 00:38:06.166 07:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:06.426 07:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:38:06.685 07:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0228a0e-efc2-4f04-bddc-8a557119013e 00:38:06.685 07:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:06.944 07:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:06.944 07:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:38:06.944 07:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 426298 00:38:06.944 07:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 426298 00:38:06.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 426298 Killed "${NVMF_APP[@]}" "$@" 00:38:06.944 07:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:38:06.944 07:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:38:06.945 07:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:06.945 07:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:06.945 07:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:06.945 07:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=430224 00:38:06.945 07:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:06.945 07:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 430224 00:38:06.945 07:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 430224 ']' 00:38:06.945 07:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:06.945 07:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:06.945 07:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:06.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:06.945 07:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:06.945 07:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:06.945 [2024-11-18 07:23:27.912639] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:06.945 [2024-11-18 07:23:27.913760] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:38:06.945 [2024-11-18 07:23:27.913833] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:07.203 [2024-11-18 07:23:27.986279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:07.203 [2024-11-18 07:23:28.030942] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:07.203 [2024-11-18 07:23:28.030998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:07.203 [2024-11-18 07:23:28.031026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:07.203 [2024-11-18 07:23:28.031037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:07.203 [2024-11-18 07:23:28.031047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:07.203 [2024-11-18 07:23:28.031581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:07.203 [2024-11-18 07:23:28.114190] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:07.203 [2024-11-18 07:23:28.114513] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:07.203 07:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:07.203 07:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:07.203 07:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:07.203 07:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:07.203 07:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:07.203 07:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:07.203 07:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:07.463 [2024-11-18 07:23:28.438270] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:07.463 [2024-11-18 07:23:28.438416] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:07.463 [2024-11-18 07:23:28.438465] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:07.721 07:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:38:07.721 07:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b52c5ff4-ea28-436a-a868-7072a13672e8 00:38:07.721 07:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b52c5ff4-ea28-436a-a868-7072a13672e8 00:38:07.721 07:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:07.721 07:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:07.721 07:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:07.721 07:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:07.721 07:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:07.979 07:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b52c5ff4-ea28-436a-a868-7072a13672e8 -t 2000 00:38:08.239 [ 00:38:08.239 { 00:38:08.239 "name": "b52c5ff4-ea28-436a-a868-7072a13672e8", 00:38:08.239 "aliases": [ 00:38:08.239 "lvs/lvol" 00:38:08.239 ], 00:38:08.239 "product_name": "Logical Volume", 00:38:08.239 "block_size": 4096, 00:38:08.239 "num_blocks": 38912, 00:38:08.239 "uuid": "b52c5ff4-ea28-436a-a868-7072a13672e8", 00:38:08.239 "assigned_rate_limits": { 00:38:08.239 "rw_ios_per_sec": 0, 00:38:08.239 "rw_mbytes_per_sec": 0, 00:38:08.239 
"r_mbytes_per_sec": 0, 00:38:08.239 "w_mbytes_per_sec": 0 00:38:08.239 }, 00:38:08.239 "claimed": false, 00:38:08.239 "zoned": false, 00:38:08.239 "supported_io_types": { 00:38:08.239 "read": true, 00:38:08.239 "write": true, 00:38:08.239 "unmap": true, 00:38:08.239 "flush": false, 00:38:08.239 "reset": true, 00:38:08.239 "nvme_admin": false, 00:38:08.239 "nvme_io": false, 00:38:08.239 "nvme_io_md": false, 00:38:08.239 "write_zeroes": true, 00:38:08.239 "zcopy": false, 00:38:08.239 "get_zone_info": false, 00:38:08.239 "zone_management": false, 00:38:08.239 "zone_append": false, 00:38:08.239 "compare": false, 00:38:08.239 "compare_and_write": false, 00:38:08.239 "abort": false, 00:38:08.239 "seek_hole": true, 00:38:08.239 "seek_data": true, 00:38:08.239 "copy": false, 00:38:08.239 "nvme_iov_md": false 00:38:08.239 }, 00:38:08.239 "driver_specific": { 00:38:08.239 "lvol": { 00:38:08.239 "lvol_store_uuid": "f0228a0e-efc2-4f04-bddc-8a557119013e", 00:38:08.239 "base_bdev": "aio_bdev", 00:38:08.239 "thin_provision": false, 00:38:08.239 "num_allocated_clusters": 38, 00:38:08.239 "snapshot": false, 00:38:08.239 "clone": false, 00:38:08.239 "esnap_clone": false 00:38:08.239 } 00:38:08.239 } 00:38:08.239 } 00:38:08.239 ] 00:38:08.239 07:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:08.239 07:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0228a0e-efc2-4f04-bddc-8a557119013e 00:38:08.239 07:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:38:08.496 07:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:38:08.496 07:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0228a0e-efc2-4f04-bddc-8a557119013e 00:38:08.496 07:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:38:08.755 07:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:38:08.755 07:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:09.013 [2024-11-18 07:23:29.808064] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:09.013 07:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0228a0e-efc2-4f04-bddc-8a557119013e 00:38:09.013 07:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:38:09.013 07:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0228a0e-efc2-4f04-bddc-8a557119013e 00:38:09.013 07:23:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:09.013 07:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:09.013 07:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:09.013 07:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:09.013 07:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:09.013 07:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:09.014 07:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:09.014 07:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:09.014 07:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0228a0e-efc2-4f04-bddc-8a557119013e 00:38:09.273 request: 00:38:09.273 { 00:38:09.273 "uuid": "f0228a0e-efc2-4f04-bddc-8a557119013e", 00:38:09.273 "method": "bdev_lvol_get_lvstores", 00:38:09.273 "req_id": 1 00:38:09.273 } 00:38:09.273 Got JSON-RPC error response 00:38:09.273 response: 00:38:09.273 { 00:38:09.273 "code": -19, 00:38:09.273 "message": "No such device" 00:38:09.273 } 00:38:09.273 07:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:38:09.273 07:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:09.273 07:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:09.273 07:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:09.273 07:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:09.532 aio_bdev 00:38:09.532 07:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b52c5ff4-ea28-436a-a868-7072a13672e8 00:38:09.532 07:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b52c5ff4-ea28-436a-a868-7072a13672e8 00:38:09.532 07:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:09.532 07:23:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:09.532 07:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:09.532 07:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:09.532 07:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:09.792 07:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b52c5ff4-ea28-436a-a868-7072a13672e8 -t 2000 00:38:10.051 [ 00:38:10.051 { 00:38:10.051 "name": "b52c5ff4-ea28-436a-a868-7072a13672e8", 00:38:10.051 "aliases": [ 00:38:10.051 "lvs/lvol" 00:38:10.051 ], 00:38:10.051 "product_name": "Logical Volume", 00:38:10.051 "block_size": 4096, 00:38:10.051 "num_blocks": 38912, 00:38:10.051 "uuid": "b52c5ff4-ea28-436a-a868-7072a13672e8", 00:38:10.051 "assigned_rate_limits": { 00:38:10.051 "rw_ios_per_sec": 0, 00:38:10.051 "rw_mbytes_per_sec": 0, 00:38:10.051 "r_mbytes_per_sec": 0, 00:38:10.051 "w_mbytes_per_sec": 0 00:38:10.051 }, 00:38:10.051 "claimed": false, 00:38:10.051 "zoned": false, 00:38:10.051 "supported_io_types": { 00:38:10.051 "read": true, 00:38:10.051 "write": true, 00:38:10.051 "unmap": true, 00:38:10.051 "flush": false, 00:38:10.051 "reset": true, 00:38:10.051 "nvme_admin": false, 00:38:10.051 "nvme_io": false, 00:38:10.051 "nvme_io_md": false, 00:38:10.051 "write_zeroes": true, 00:38:10.051 "zcopy": false, 00:38:10.051 "get_zone_info": false, 00:38:10.051 "zone_management": false, 00:38:10.051 "zone_append": false, 00:38:10.051 "compare": false, 00:38:10.051 "compare_and_write": false, 00:38:10.051 "abort": false, 00:38:10.051 "seek_hole": true, 00:38:10.051 "seek_data": true, 00:38:10.051 "copy": false, 00:38:10.051 "nvme_iov_md": false 00:38:10.051 }, 00:38:10.051 "driver_specific": { 00:38:10.051 "lvol": { 00:38:10.051 "lvol_store_uuid": "f0228a0e-efc2-4f04-bddc-8a557119013e", 00:38:10.051 "base_bdev": "aio_bdev", 00:38:10.051 "thin_provision": false, 00:38:10.051 "num_allocated_clusters": 38, 00:38:10.051 "snapshot": false, 00:38:10.051 "clone": false, 00:38:10.051 "esnap_clone": false 00:38:10.051 } 00:38:10.051 } 00:38:10.051 } 00:38:10.051 ] 00:38:10.051 07:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:10.051 07:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0228a0e-efc2-4f04-bddc-8a557119013e 00:38:10.051 07:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:10.311 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:10.311 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0228a0e-efc2-4f04-bddc-8a557119013e 00:38:10.311 07:23:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:10.571 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:10.571 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b52c5ff4-ea28-436a-a868-7072a13672e8 00:38:10.830 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f0228a0e-efc2-4f04-bddc-8a557119013e 00:38:11.089 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:11.349 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:11.608 00:38:11.608 real 0m19.609s 00:38:11.608 user 0m36.266s 00:38:11.608 sys 0m4.788s 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:11.608 ************************************ 00:38:11.608 END TEST lvs_grow_dirty 00:38:11.608 ************************************ 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:11.608 nvmf_trace.0 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
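The END TEST lvs_grow_dirty block above also archives the target's shared-memory trace file so it can be inspected offline; the startup banner earlier in the log suggested 'spdk_trace -s nvmf -i 0' for a live snapshot. A short sketch of that capture step, with $output_dir standing in for the Jenkins ../output directory shown in the trace:

    # process_shm --id 0: locate and archive the nvmf trace shared-memory file
    find /dev/shm -name '*.0' -printf '%f\n'                              # -> nvmf_trace.0
    tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0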
00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:11.608 rmmod nvme_tcp 00:38:11.608 rmmod nvme_fabrics 00:38:11.608 rmmod nvme_keyring 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 430224 ']' 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 430224 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 430224 ']' 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 430224 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 430224 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 430224' 00:38:11.608 killing process with pid 430224 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 430224 00:38:11.608 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 430224 00:38:11.866 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:11.866 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:11.866 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:11.866 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:38:11.866 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:38:11.866 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:11.866 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:38:11.866 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:11.866 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:11.866 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:11.866 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:11.866 07:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:13.772 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:13.772 00:38:13.772 real 0m42.773s 00:38:13.772 user 0m55.342s 00:38:13.772 sys 0m8.566s 00:38:13.772 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:13.772 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:13.772 ************************************ 00:38:13.772 END TEST nvmf_lvs_grow 00:38:13.772 ************************************ 00:38:13.772 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:13.772 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:13.772 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:13.772 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:14.031 ************************************ 00:38:14.031 START TEST nvmf_bdev_io_wait 00:38:14.031 ************************************ 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:14.031 * Looking for test storage... 
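Before the suite moves on, nvmftestfini above unloads the kernel NVMe/TCP initiator modules, stops the interrupt-mode target, and restores host networking; the harness then launches the next sub-test with run_test nvmf_bdev_io_wait bdev_io_wait.sh --transport=tcp --interrupt-mode, whose storage probe continues below. A compressed sketch of that teardown, reconstructed from the trace (the _remove_spdk_ns body runs with xtrace disabled, so its exact commands are not shown here):

    modprobe -v -r nvme-tcp          # the rmmod lines above show nvme_tcp, nvme_fabrics, nvme_keyring unloading
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                                           # killprocess 430224 in the trace
    iptables-save | grep -v SPDK_NVMF | iptables-restore      # iptr: drop the SPDK_NVMF rules
    ip -4 addr flush cvl_0_1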
00:38:14.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:14.031 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:14.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.031 --rc genhtml_branch_coverage=1 00:38:14.031 --rc genhtml_function_coverage=1 00:38:14.031 --rc genhtml_legend=1 00:38:14.031 --rc geninfo_all_blocks=1 00:38:14.031 --rc geninfo_unexecuted_blocks=1 00:38:14.031 00:38:14.032 ' 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:14.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.032 --rc genhtml_branch_coverage=1 00:38:14.032 --rc genhtml_function_coverage=1 00:38:14.032 --rc genhtml_legend=1 00:38:14.032 --rc geninfo_all_blocks=1 00:38:14.032 --rc geninfo_unexecuted_blocks=1 00:38:14.032 00:38:14.032 ' 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:14.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.032 --rc genhtml_branch_coverage=1 00:38:14.032 --rc genhtml_function_coverage=1 00:38:14.032 --rc genhtml_legend=1 00:38:14.032 --rc geninfo_all_blocks=1 00:38:14.032 --rc geninfo_unexecuted_blocks=1 00:38:14.032 00:38:14.032 ' 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:14.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.032 --rc genhtml_branch_coverage=1 00:38:14.032 --rc genhtml_function_coverage=1 00:38:14.032 --rc genhtml_legend=1 00:38:14.032 --rc geninfo_all_blocks=1 00:38:14.032 --rc 
geninfo_unexecuted_blocks=1 00:38:14.032 00:38:14.032 ' 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:14.032 07:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
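Up to this point nvmf/common.sh has only assembled its lookup table of supported NICs, Intel e810/x722 and Mellanox parts keyed by PCI vendor and device ID; the loop that follows walks every matching PCI function and resolves it to a kernel net device through sysfs. For reference, a minimal standalone sketch of the same lookup (illustrative only, not part of the captured run; the device-ID list below is a subset of the IDs shown in the trace, and the sysfs paths are the ones the trace itself reads):

intel=0x8086; mellanox=0x15b3
supported="$intel:0x1592 $intel:0x159b $intel:0x37d2 $mellanox:0x1017 $mellanox:0x1019"
for dev in /sys/bus/pci/devices/*; do
    # vendor/device read back as 0x-prefixed hex, e.g. 0x8086:0x159b for an E810 port
    id="$(cat "$dev/vendor"):$(cat "$dev/device")"
    case " $supported " in
        *" $id "*) echo "Found ${dev##*/} ($id): $(ls "$dev/net" 2>/dev/null)" ;;
    esac
done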
00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:16.565 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:16.565 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:16.565 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:16.565 
07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:16.565 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:16.565 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:16.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:16.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:38:16.566 00:38:16.566 --- 10.0.0.2 ping statistics --- 00:38:16.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:16.566 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:16.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:16.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:38:16.566 00:38:16.566 --- 10.0.0.1 ping statistics --- 00:38:16.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:16.566 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=432742 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 432742 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 432742 ']' 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:16.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
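The two pings close out nvmf_tcp_init: one port of the ice NIC (cvl_0_0, 10.0.0.2) has been moved into the cvl_0_0_ns_spdk namespace to host the target, its sibling port (cvl_0_1, 10.0.0.1) stays in the root namespace for the initiator side, and TCP port 4420 is opened in iptables on the initiator interface. Condensed into plain commands, the same setup looks roughly like the sketch below (interface names and addresses are the ones this host reported above and will differ on other machines):

ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one NIC port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root namespace -> target port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator port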
00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:16.566 [2024-11-18 07:23:37.271090] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:16.566 [2024-11-18 07:23:37.272148] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:38:16.566 [2024-11-18 07:23:37.272211] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:16.566 [2024-11-18 07:23:37.340717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:16.566 [2024-11-18 07:23:37.386563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:16.566 [2024-11-18 07:23:37.386626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:16.566 [2024-11-18 07:23:37.386655] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:16.566 [2024-11-18 07:23:37.386666] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:16.566 [2024-11-18 07:23:37.386676] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:16.566 [2024-11-18 07:23:37.388143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:16.566 [2024-11-18 07:23:37.388205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:16.566 [2024-11-18 07:23:37.388313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:16.566 [2024-11-18 07:23:37.388316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:16.566 [2024-11-18 07:23:37.388787] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
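The notices above confirm that the --interrupt-mode launch took effect: nvmf_tgt comes up inside the target namespace with four reactors (cores 0-3) and app_thread switched to interrupt mode, then idles on --wait-for-rpc. Reduced to a sketch, the launch step is roughly the following; waitforlisten in the trace is an autotest helper, so the polling loop here is only a simplified stand-in for it, and the binary path is relative to the SPDK repository root:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
nvmfpid=$!
# crude replacement for waitforlisten: block until the RPC socket appears
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
echo "nvmf_tgt up as pid $nvmfpid"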
00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.566 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:16.826 [2024-11-18 07:23:37.589988] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:16.826 [2024-11-18 07:23:37.590227] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:16.826 [2024-11-18 07:23:37.591045] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:16.826 [2024-11-18 07:23:37.591842] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
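From here the target is provisioned entirely over its RPC socket. The rpc_cmd calls issued across the next lines (bdev_set_options through nvmf_subsystem_add_listener) are equivalent to driving the in-tree scripts/rpc.py client by hand against the same UNIX socket; the sketch below simply copies the arguments from the trace, one rpc.py invocation per RPC:

rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc bdev_set_options -p 5 -c 1        # tiny bdev_io pool/cache, the shortage this io_wait test exercises
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420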
00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:16.826 [2024-11-18 07:23:37.597004] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:16.826 Malloc0 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:16.826 [2024-11-18 07:23:37.653145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=432892 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:16.826 07:23:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=432894 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:16.826 { 00:38:16.826 "params": { 00:38:16.826 "name": "Nvme$subsystem", 00:38:16.826 "trtype": "$TEST_TRANSPORT", 00:38:16.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:16.826 "adrfam": "ipv4", 00:38:16.826 "trsvcid": "$NVMF_PORT", 00:38:16.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:16.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:16.826 "hdgst": ${hdgst:-false}, 00:38:16.826 "ddgst": ${ddgst:-false} 00:38:16.826 }, 00:38:16.826 "method": "bdev_nvme_attach_controller" 00:38:16.826 } 00:38:16.826 EOF 00:38:16.826 )") 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=432896 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:16.826 { 00:38:16.826 "params": { 00:38:16.826 "name": "Nvme$subsystem", 00:38:16.826 "trtype": "$TEST_TRANSPORT", 00:38:16.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:16.826 "adrfam": "ipv4", 00:38:16.826 "trsvcid": "$NVMF_PORT", 00:38:16.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:16.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:16.826 "hdgst": ${hdgst:-false}, 00:38:16.826 "ddgst": ${ddgst:-false} 00:38:16.826 }, 00:38:16.826 "method": "bdev_nvme_attach_controller" 00:38:16.826 } 00:38:16.826 EOF 00:38:16.826 )") 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:16.826 
07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=432899 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:16.826 { 00:38:16.826 "params": { 00:38:16.826 "name": "Nvme$subsystem", 00:38:16.826 "trtype": "$TEST_TRANSPORT", 00:38:16.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:16.826 "adrfam": "ipv4", 00:38:16.826 "trsvcid": "$NVMF_PORT", 00:38:16.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:16.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:16.826 "hdgst": ${hdgst:-false}, 00:38:16.826 "ddgst": ${ddgst:-false} 00:38:16.826 }, 00:38:16.826 "method": "bdev_nvme_attach_controller" 00:38:16.826 } 00:38:16.826 EOF 00:38:16.826 )") 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:16.826 { 00:38:16.826 "params": { 00:38:16.826 "name": "Nvme$subsystem", 00:38:16.826 "trtype": "$TEST_TRANSPORT", 00:38:16.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:16.826 "adrfam": "ipv4", 00:38:16.826 "trsvcid": "$NVMF_PORT", 00:38:16.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:16.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:16.826 "hdgst": ${hdgst:-false}, 00:38:16.826 "ddgst": ${ddgst:-false} 00:38:16.826 }, 00:38:16.826 "method": "bdev_nvme_attach_controller" 00:38:16.826 } 00:38:16.826 EOF 00:38:16.826 )") 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 432892 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:16.826 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:16.827 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:16.827 "params": { 00:38:16.827 "name": "Nvme1", 00:38:16.827 "trtype": "tcp", 00:38:16.827 "traddr": "10.0.0.2", 00:38:16.827 "adrfam": "ipv4", 00:38:16.827 "trsvcid": "4420", 00:38:16.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:16.827 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:16.827 "hdgst": false, 00:38:16.827 "ddgst": false 00:38:16.827 }, 00:38:16.827 "method": "bdev_nvme_attach_controller" 00:38:16.827 }' 00:38:16.827 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:16.827 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:16.827 "params": { 00:38:16.827 "name": "Nvme1", 00:38:16.827 "trtype": "tcp", 00:38:16.827 "traddr": "10.0.0.2", 00:38:16.827 "adrfam": "ipv4", 00:38:16.827 "trsvcid": "4420", 00:38:16.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:16.827 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:16.827 "hdgst": false, 00:38:16.827 "ddgst": false 00:38:16.827 }, 00:38:16.827 "method": "bdev_nvme_attach_controller" 00:38:16.827 }' 00:38:16.827 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:16.827 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:16.827 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:16.827 "params": { 00:38:16.827 "name": "Nvme1", 00:38:16.827 "trtype": "tcp", 00:38:16.827 "traddr": "10.0.0.2", 00:38:16.827 "adrfam": "ipv4", 00:38:16.827 "trsvcid": "4420", 00:38:16.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:16.827 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:16.827 "hdgst": false, 00:38:16.827 "ddgst": false 00:38:16.827 }, 00:38:16.827 "method": "bdev_nvme_attach_controller" 00:38:16.827 }' 00:38:16.827 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:16.827 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:16.827 "params": { 00:38:16.827 "name": "Nvme1", 00:38:16.827 "trtype": "tcp", 00:38:16.827 "traddr": "10.0.0.2", 00:38:16.827 "adrfam": "ipv4", 00:38:16.827 "trsvcid": "4420", 00:38:16.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:16.827 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:16.827 "hdgst": false, 00:38:16.827 "ddgst": false 00:38:16.827 }, 00:38:16.827 "method": "bdev_nvme_attach_controller" 00:38:16.827 }' 00:38:16.827 [2024-11-18 07:23:37.702689] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:38:16.827 [2024-11-18 07:23:37.702690] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:38:16.827 [2024-11-18 07:23:37.702794] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:38:16.827 [2024-11-18 07:23:37.702795] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:38:16.827 [2024-11-18 07:23:37.702953] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:38:16.827 [2024-11-18 07:23:37.702953] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:38:16.827 [2024-11-18 07:23:37.703029] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:38:16.827 [2024-11-18 07:23:37.703030] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:17.085 [2024-11-18 07:23:37.892150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:17.085 [2024-11-18 07:23:37.934128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:17.085 [2024-11-18 07:23:37.989401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:17.085 [2024-11-18 07:23:38.028034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:17.085 [2024-11-18 07:23:38.054156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:17.344 [2024-11-18 07:23:38.091716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:17.344 [2024-11-18 07:23:38.121265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:17.344 [2024-11-18 07:23:38.159115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:17.603 Running I/O for 1 seconds... 00:38:17.603 Running I/O for 1 seconds... 00:38:17.603 Running I/O for 1 seconds... 00:38:17.603 Running I/O for 1 seconds... 
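The four "Running I/O for 1 seconds..." lines are the four single-core bdevperf instances (write, read, flush and unmap) launched above, each attaching to the same subsystem over TCP with a generated bdev_nvme_attach_controller config passed on /dev/fd/63; their per-job result tables follow. As a sketch, the write job can be reproduced by hand as below. The inner params object is exactly the fragment printed in the trace; wrapping it in the usual SPDK application JSON layout ("subsystems" / "bdev" / "config") and writing it to a file instead of a process substitution are assumptions made here for readability, not something the log shows:

cat > /tmp/nvme_tcp.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme_tcp.json -q 128 -o 4096 -w write -t 1 -s 256

The read, flush and unmap jobs in the trace differ only in their core mask (-m), shm id (-i) and -w workload.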
00:38:18.541 10329.00 IOPS, 40.35 MiB/s [2024-11-18T06:23:39.519Z] 8829.00 IOPS, 34.49 MiB/s 00:38:18.541 Latency(us) 00:38:18.541 [2024-11-18T06:23:39.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:18.541 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:18.541 Nvme1n1 : 1.01 8895.95 34.75 0.00 0.00 14328.57 5679.79 18544.26 00:38:18.541 [2024-11-18T06:23:39.519Z] =================================================================================================================== 00:38:18.541 [2024-11-18T06:23:39.519Z] Total : 8895.95 34.75 0.00 0.00 14328.57 5679.79 18544.26 00:38:18.541 00:38:18.541 Latency(us) 00:38:18.541 [2024-11-18T06:23:39.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:18.541 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:18.541 Nvme1n1 : 1.05 9970.73 38.95 0.00 0.00 12298.21 3980.71 47185.92 00:38:18.541 [2024-11-18T06:23:39.519Z] =================================================================================================================== 00:38:18.541 [2024-11-18T06:23:39.519Z] Total : 9970.73 38.95 0.00 0.00 12298.21 3980.71 47185.92 00:38:18.541 10210.00 IOPS, 39.88 MiB/s 00:38:18.541 Latency(us) 00:38:18.541 [2024-11-18T06:23:39.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:18.541 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:18.541 Nvme1n1 : 1.01 10288.60 40.19 0.00 0.00 12402.81 2585.03 18835.53 00:38:18.541 [2024-11-18T06:23:39.519Z] =================================================================================================================== 00:38:18.541 [2024-11-18T06:23:39.519Z] Total : 10288.60 40.19 0.00 0.00 12402.81 2585.03 18835.53 00:38:18.541 185984.00 IOPS, 726.50 MiB/s 00:38:18.541 Latency(us) 00:38:18.541 [2024-11-18T06:23:39.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:18.541 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:18.541 Nvme1n1 : 1.00 185616.68 725.07 0.00 0.00 685.87 295.82 1941.81 00:38:18.541 [2024-11-18T06:23:39.519Z] =================================================================================================================== 00:38:18.541 [2024-11-18T06:23:39.519Z] Total : 185616.68 725.07 0.00 0.00 685.87 295.82 1941.81 00:38:18.541 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 432894 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 432896 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 432899 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:18.799 07:23:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:18.799 rmmod nvme_tcp 00:38:18.799 rmmod nvme_fabrics 00:38:18.799 rmmod nvme_keyring 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 432742 ']' 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 432742 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 432742 ']' 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 432742 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 432742 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 432742' 00:38:18.799 killing process with pid 432742 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 432742 00:38:18.799 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 432742 00:38:19.058 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:19.058 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:19.058 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:19.058 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:38:19.058 07:23:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:38:19.058 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:19.058 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:38:19.058 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:19.058 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:19.058 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:19.058 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:19.058 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:20.964 07:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:20.965 00:38:20.965 real 0m7.141s 00:38:20.965 user 0m14.031s 00:38:20.965 sys 0m4.063s 00:38:20.965 07:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:20.965 07:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:20.965 ************************************ 00:38:20.965 END TEST nvmf_bdev_io_wait 00:38:20.965 ************************************ 00:38:20.965 07:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:20.965 07:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:20.965 07:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:20.965 07:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:21.224 ************************************ 00:38:21.224 START TEST nvmf_queue_depth 00:38:21.224 ************************************ 00:38:21.224 07:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:21.224 * Looking for test storage... 
00:38:21.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:21.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:21.224 --rc genhtml_branch_coverage=1 00:38:21.224 --rc genhtml_function_coverage=1 00:38:21.224 --rc genhtml_legend=1 00:38:21.224 --rc geninfo_all_blocks=1 00:38:21.224 --rc geninfo_unexecuted_blocks=1 00:38:21.224 00:38:21.224 ' 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:21.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:21.224 --rc genhtml_branch_coverage=1 00:38:21.224 --rc genhtml_function_coverage=1 00:38:21.224 --rc genhtml_legend=1 00:38:21.224 --rc geninfo_all_blocks=1 00:38:21.224 --rc geninfo_unexecuted_blocks=1 00:38:21.224 00:38:21.224 ' 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:21.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:21.224 --rc genhtml_branch_coverage=1 00:38:21.224 --rc genhtml_function_coverage=1 00:38:21.224 --rc genhtml_legend=1 00:38:21.224 --rc geninfo_all_blocks=1 00:38:21.224 --rc geninfo_unexecuted_blocks=1 00:38:21.224 00:38:21.224 ' 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:21.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:21.224 --rc genhtml_branch_coverage=1 00:38:21.224 --rc genhtml_function_coverage=1 00:38:21.224 --rc genhtml_legend=1 00:38:21.224 --rc geninfo_all_blocks=1 00:38:21.224 --rc 
geninfo_unexecuted_blocks=1 00:38:21.224 00:38:21.224 ' 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:21.224 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:38:21.225 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:23.762 07:23:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:23.762 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:23.762 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:23.762 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:38:23.763 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:23.763 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:23.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:23.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:38:23.763 00:38:23.763 --- 10.0.0.2 ping statistics --- 00:38:23.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:23.763 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:23.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:23.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:38:23.763 00:38:23.763 --- 10.0.0.1 ping statistics --- 00:38:23.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:23.763 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=435014 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 435014 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 435014 ']' 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:23.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
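For reference, the test-network plumbing traced above (nvmf_tcp_init in nvmf/common.sh) can be reproduced by hand. A minimal sketch, assuming the same cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses shown in the log:

    # Clear stale addresses, then move the target-side port into its own namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address both ends: initiator stays in the root namespace, target lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP listener port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The SPDK_NVMF comment on the iptables rule is what lets the later nvmftestfini strip it again via iptables-save | grep -v SPDK_NVMF | iptables-restore, as seen in the cleanup trace.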
00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:23.763 [2024-11-18 07:23:44.418457] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:23.763 [2024-11-18 07:23:44.419697] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:38:23.763 [2024-11-18 07:23:44.419761] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:23.763 [2024-11-18 07:23:44.501471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:23.763 [2024-11-18 07:23:44.549558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:23.763 [2024-11-18 07:23:44.549623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:23.763 [2024-11-18 07:23:44.549638] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:23.763 [2024-11-18 07:23:44.549650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:23.763 [2024-11-18 07:23:44.549661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:23.763 [2024-11-18 07:23:44.550300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:23.763 [2024-11-18 07:23:44.644786] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:23.763 [2024-11-18 07:23:44.645095] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
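The NOTICE lines above confirm the target came up interrupt-driven (spdk_interrupt_mode_enable, reactor on core 1, app_thread and nvmf_tgt_poll_group_000 in intr mode). A rough sketch of what nvmfappstart -m 0x2 does here, assuming the same binary path and namespace as the trace; the $! capture is an assumption about how nvmfpid gets recorded:

    # Launch the NVMe-oF target inside the target namespace, interrupt mode,
    # pinned to core 1 (-m 0x2); nvmfpid is what killprocess tears down later.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock accepts RPC connections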
00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.763 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:23.764 [2024-11-18 07:23:44.694865] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:23.764 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.764 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:23.764 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.764 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:23.764 Malloc0 00:38:23.764 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.764 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:23.764 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.764 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:23.764 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.764 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:23.764 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.764 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:24.023 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.023 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:24.023 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:38:24.023 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:24.023 [2024-11-18 07:23:44.751056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:24.023 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.023 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=435135 00:38:24.023 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:24.023 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 435135 /var/tmp/bdevperf.sock 00:38:24.023 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:24.023 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 435135 ']' 00:38:24.023 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:24.023 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:24.023 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:24.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:24.023 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:24.023 07:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:24.023 [2024-11-18 07:23:44.801784] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:38:24.023 [2024-11-18 07:23:44.801875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435135 ] 00:38:24.023 [2024-11-18 07:23:44.868620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.023 [2024-11-18 07:23:44.914769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:24.281 07:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:24.281 07:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:24.281 07:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:24.281 07:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.281 07:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:24.281 NVMe0n1 00:38:24.281 07:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.281 07:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:24.540 Running I/O for 10 seconds... 00:38:26.418 8192.00 IOPS, 32.00 MiB/s [2024-11-18T06:23:48.780Z] 8504.00 IOPS, 33.22 MiB/s [2024-11-18T06:23:49.718Z] 8538.00 IOPS, 33.35 MiB/s [2024-11-18T06:23:50.656Z] 8610.50 IOPS, 33.63 MiB/s [2024-11-18T06:23:51.593Z] 8603.00 IOPS, 33.61 MiB/s [2024-11-18T06:23:52.528Z] 8662.50 IOPS, 33.84 MiB/s [2024-11-18T06:23:53.466Z] 8646.00 IOPS, 33.77 MiB/s [2024-11-18T06:23:54.400Z] 8700.12 IOPS, 33.98 MiB/s [2024-11-18T06:23:55.780Z] 8726.11 IOPS, 34.09 MiB/s [2024-11-18T06:23:55.780Z] 8712.80 IOPS, 34.03 MiB/s 00:38:34.802 Latency(us) 00:38:34.802 [2024-11-18T06:23:55.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:34.802 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:38:34.802 Verification LBA range: start 0x0 length 0x4000 00:38:34.802 NVMe0n1 : 10.06 8757.37 34.21 0.00 0.00 116461.11 13010.11 69905.07 00:38:34.802 [2024-11-18T06:23:55.780Z] =================================================================================================================== 00:38:34.802 [2024-11-18T06:23:55.780Z] Total : 8757.37 34.21 0.00 0.00 116461.11 13010.11 69905.07 00:38:34.802 { 00:38:34.802 "results": [ 00:38:34.802 { 00:38:34.802 "job": "NVMe0n1", 00:38:34.802 "core_mask": "0x1", 00:38:34.802 "workload": "verify", 00:38:34.802 "status": "finished", 00:38:34.802 "verify_range": { 00:38:34.802 "start": 0, 00:38:34.802 "length": 16384 00:38:34.802 }, 00:38:34.802 "queue_depth": 1024, 00:38:34.802 "io_size": 4096, 00:38:34.802 "runtime": 10.064204, 00:38:34.802 "iops": 8757.374154975396, 00:38:34.802 "mibps": 34.20849279287264, 00:38:34.802 "io_failed": 0, 00:38:34.802 "io_timeout": 0, 00:38:34.802 "avg_latency_us": 116461.11156697225, 00:38:34.802 "min_latency_us": 13010.10962962963, 00:38:34.802 "max_latency_us": 69905.06666666667 00:38:34.802 } 00:38:34.802 ], 
00:38:34.802 "core_count": 1 00:38:34.802 } 00:38:34.802 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 435135 00:38:34.802 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 435135 ']' 00:38:34.802 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 435135 00:38:34.802 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:38:34.802 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:34.802 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 435135 00:38:34.802 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:34.802 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:34.802 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 435135' 00:38:34.802 killing process with pid 435135 00:38:34.802 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 435135 00:38:34.802 Received shutdown signal, test time was about 10.000000 seconds 00:38:34.802 00:38:34.802 Latency(us) 00:38:34.802 [2024-11-18T06:23:55.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:34.802 [2024-11-18T06:23:55.780Z] =================================================================================================================== 00:38:34.802 [2024-11-18T06:23:55.780Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:34.802 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 435135 00:38:34.802 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:38:34.802 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:38:34.802 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:34.802 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:38:34.803 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:34.803 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:38:34.803 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:34.803 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:34.803 rmmod nvme_tcp 00:38:34.803 rmmod nvme_fabrics 00:38:34.803 rmmod nvme_keyring 00:38:34.803 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:34.803 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:38:34.803 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:38:34.803 07:23:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 435014 ']' 00:38:34.803 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 435014 00:38:34.803 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 435014 ']' 00:38:34.803 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 435014 00:38:34.803 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:38:34.803 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:34.803 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 435014 00:38:35.061 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:35.061 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:35.061 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 435014' 00:38:35.061 killing process with pid 435014 00:38:35.061 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 435014 00:38:35.061 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 435014 00:38:35.061 07:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:35.061 07:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:35.061 07:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:35.061 07:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:38:35.061 07:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:38:35.061 07:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:35.061 07:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:38:35.061 07:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:35.061 07:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:35.061 07:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:35.061 07:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:35.061 07:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:37.599 00:38:37.599 real 0m16.095s 00:38:37.599 user 0m22.372s 00:38:37.599 sys 0m3.280s 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:37.599 ************************************ 00:38:37.599 END TEST nvmf_queue_depth 00:38:37.599 ************************************ 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:37.599 ************************************ 00:38:37.599 START TEST nvmf_target_multipath 00:38:37.599 ************************************ 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:37.599 * Looking for test storage... 00:38:37.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:38:37.599 07:23:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:38:37.599 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:37.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.600 --rc genhtml_branch_coverage=1 00:38:37.600 --rc genhtml_function_coverage=1 00:38:37.600 --rc genhtml_legend=1 00:38:37.600 --rc geninfo_all_blocks=1 00:38:37.600 --rc geninfo_unexecuted_blocks=1 00:38:37.600 00:38:37.600 ' 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:37.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.600 --rc genhtml_branch_coverage=1 00:38:37.600 --rc genhtml_function_coverage=1 00:38:37.600 --rc genhtml_legend=1 00:38:37.600 --rc geninfo_all_blocks=1 00:38:37.600 --rc geninfo_unexecuted_blocks=1 00:38:37.600 00:38:37.600 ' 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:37.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.600 --rc genhtml_branch_coverage=1 00:38:37.600 --rc genhtml_function_coverage=1 00:38:37.600 --rc genhtml_legend=1 00:38:37.600 --rc geninfo_all_blocks=1 00:38:37.600 --rc 
geninfo_unexecuted_blocks=1 00:38:37.600 00:38:37.600 ' 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:37.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.600 --rc genhtml_branch_coverage=1 00:38:37.600 --rc genhtml_function_coverage=1 00:38:37.600 --rc genhtml_legend=1 00:38:37.600 --rc geninfo_all_blocks=1 00:38:37.600 --rc geninfo_unexecuted_blocks=1 00:38:37.600 00:38:37.600 ' 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
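The lt/cmp_versions trace above appears to be deciding only whether the installed lcov (reported as 1.15) predates 2.x, so that the matching spelling of the --rc coverage options can be exported. A minimal stand-alone sketch of that kind of dotted-version comparison, in generic shell rather than the SPDK helper itself and assuming purely numeric fields:

compare_versions() {   # prints lt, eq or gt for two dotted version strings
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < len; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}      # a missing field counts as 0
        (( x < y )) && { echo lt; return; }
        (( x > y )) && { echo gt; return; }
    done
    echo eq
}
# the check traced above reduces to: is lcov 1.15 older than 2?
[[ "$(compare_versions 1.15 2)" == lt ]] && echo "lcov predates 2.x"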
00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:37.600 07:23:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:37.600 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:37.601 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:37.601 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:37.601 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:37.601 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:38:37.601 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:37.601 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:37.601 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:37.601 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:37.601 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:37.601 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:37.601 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:37.601 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:37.601 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:37.601 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:37.601 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:38:37.601 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
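The build_nvmf_app_args steps above amount to conditional bash-array construction: the -i/-e base flags are always appended, --interrupt-mode only because this run requests it, and further down the trace the whole command is prefixed with the namespace wrapper. A rough sketch of that pattern, with placeholder names (SHM_ID, INTERRUPT_MODE, TGT_NS and ./nvmf_tgt here are illustrative, not the harness variables):

APP_ARGS=(./nvmf_tgt -i "${SHM_ID:-0}" -e 0xFFFF)                   # base flags, as in the trace
[[ "${INTERRUPT_MODE:-0}" -eq 1 ]] && APP_ARGS+=(--interrupt-mode)  # optional flag
[[ -n "${TGT_NS:-}" ]] && APP_ARGS=(ip netns exec "$TGT_NS" "${APP_ARGS[@]}")  # wrap in a netns
"${APP_ARGS[@]}"   # each array element expands as one word, so paths and flags stay intact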
00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:39.507 07:24:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:39.507 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:39.507 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:39.507 07:24:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:39.507 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:39.507 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:39.507 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:39.508 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:39.508 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:38:39.508 00:38:39.508 --- 10.0.0.2 ping statistics --- 00:38:39.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:39.508 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:39.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:39.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:38:39.508 00:38:39.508 --- 10.0.0.1 ping statistics --- 00:38:39.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:39.508 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:38:39.508 only one NIC for nvmf test 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:39.508 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:39.508 rmmod nvme_tcp 00:38:39.508 rmmod nvme_fabrics 00:38:39.508 rmmod nvme_keyring 00:38:39.766 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:39.766 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:39.766 07:24:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:39.766 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:39.766 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:39.766 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:39.766 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:39.766 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:39.766 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:38:39.766 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:39.766 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:38:39.766 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:39.766 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:39.766 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:39.766 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:39.766 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.671 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:41.672 07:24:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:41.672 00:38:41.672 real 0m4.454s 00:38:41.672 user 0m0.862s 00:38:41.672 sys 0m1.592s 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:41.672 ************************************ 00:38:41.672 END TEST nvmf_target_multipath 00:38:41.672 ************************************ 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:41.672 ************************************ 00:38:41.672 START TEST nvmf_zcopy 00:38:41.672 ************************************ 00:38:41.672 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:41.931 * Looking for test storage... 
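The nvmftestinit/nvmftestfini sequence traced above, and repeated for the zcopy test below, builds a two-port loopback topology: one port of the NIC pair is moved into a private network namespace to act as the target, the peer port stays in the root namespace as the initiator, each side gets a /24 address, an iptables rule opens TCP port 4420, and a ping in both directions confirms reachability before the target application is launched. A condensed sketch of that wiring, using placeholder names (tgt_ns, eth_tgt, eth_init are illustrative, not the cvl_* names the harness derives):

TGT_NS=tgt_ns TGT_IF=eth_tgt INIT_IF=eth_init
ip netns add "$TGT_NS"
ip link set "$TGT_IF" netns "$TGT_NS"                             # target port into the namespace
ip addr add 10.0.0.1/24 dev "$INIT_IF"                            # initiator side
ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"     # target side
ip link set "$INIT_IF" up
ip netns exec "$TGT_NS" ip link set "$TGT_IF" up
ip netns exec "$TGT_NS" ip link set lo up
iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2 && ip netns exec "$TGT_NS" ping -c 1 10.0.0.1  # reachability both ways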
00:38:41.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:41.931 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:41.931 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:38:41.931 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:41.931 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:41.931 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:41.931 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:41.931 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:41.931 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:38:41.931 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:38:41.931 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:38:41.931 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:38:41.931 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:38:41.931 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:38:41.931 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:38:41.931 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:41.931 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:38:41.931 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:38:41.931 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:41.931 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:41.931 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:38:41.931 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:38:41.931 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:41.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.932 --rc genhtml_branch_coverage=1 00:38:41.932 --rc genhtml_function_coverage=1 00:38:41.932 --rc genhtml_legend=1 00:38:41.932 --rc geninfo_all_blocks=1 00:38:41.932 --rc geninfo_unexecuted_blocks=1 00:38:41.932 00:38:41.932 ' 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:41.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.932 --rc genhtml_branch_coverage=1 00:38:41.932 --rc genhtml_function_coverage=1 00:38:41.932 --rc genhtml_legend=1 00:38:41.932 --rc geninfo_all_blocks=1 00:38:41.932 --rc geninfo_unexecuted_blocks=1 00:38:41.932 00:38:41.932 ' 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:41.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.932 --rc genhtml_branch_coverage=1 00:38:41.932 --rc genhtml_function_coverage=1 00:38:41.932 --rc genhtml_legend=1 00:38:41.932 --rc geninfo_all_blocks=1 00:38:41.932 --rc geninfo_unexecuted_blocks=1 00:38:41.932 00:38:41.932 ' 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:41.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.932 --rc genhtml_branch_coverage=1 00:38:41.932 --rc genhtml_function_coverage=1 00:38:41.932 --rc genhtml_legend=1 00:38:41.932 --rc geninfo_all_blocks=1 00:38:41.932 --rc geninfo_unexecuted_blocks=1 00:38:41.932 00:38:41.932 ' 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:41.932 07:24:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:38:41.932 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:38:44.469 07:24:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:44.469 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:44.469 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:44.470 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:44.470 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:44.470 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:44.470 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:44.470 07:24:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:44.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:44.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:38:44.470 00:38:44.470 --- 10.0.0.2 ping statistics --- 00:38:44.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:44.470 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:44.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:44.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:38:44.470 00:38:44.470 --- 10.0.0.1 ping statistics --- 00:38:44.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:44.470 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=440383 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 440383 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 440383 ']' 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:44.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:44.470 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:44.470 [2024-11-18 07:24:05.206718] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:44.470 [2024-11-18 07:24:05.207847] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:38:44.470 [2024-11-18 07:24:05.207919] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:44.470 [2024-11-18 07:24:05.281613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:44.470 [2024-11-18 07:24:05.325678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:44.471 [2024-11-18 07:24:05.325735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:44.471 [2024-11-18 07:24:05.325750] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:44.471 [2024-11-18 07:24:05.325763] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:44.471 [2024-11-18 07:24:05.325784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:44.471 [2024-11-18 07:24:05.326347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:44.471 [2024-11-18 07:24:05.410206] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:44.471 [2024-11-18 07:24:05.410529] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
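To make the bring-up traced above easier to follow, the nvmf_tcp_init and nvmfappstart steps reduce to roughly the shell commands below. This is a condensed sketch of what the trace shows, not the helper functions themselves; the cvl_0_0/cvl_0_1 interface names, the 10.0.0.x addresses, and the Jenkins workspace path (shortened here to the spdk repo root) are specific to this run.

    # Target-side netdev moves into its own namespace; the initiator side stays in the host.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # host -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> host reachability
    # nvmfappstart then launches the target inside the namespace in interrupt mode
    # (-i 0: shm id, -e 0xFFFF: tracepoint group mask, -m 0x2: core mask):
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &

The two successful pings and the "Set SPDK running in interrupt mode" notices in the trace are the checkpoints that this setup worked before any NVMe-oF configuration begins.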
00:38:44.471 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:44.471 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:38:44.471 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:44.471 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:44.471 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:44.730 [2024-11-18 07:24:05.466954] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:44.730 [2024-11-18 07:24:05.483102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:38:44.730 07:24:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:44.730 malloc0 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:44.730 { 00:38:44.730 "params": { 00:38:44.730 "name": "Nvme$subsystem", 00:38:44.730 "trtype": "$TEST_TRANSPORT", 00:38:44.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:44.730 "adrfam": "ipv4", 00:38:44.730 "trsvcid": "$NVMF_PORT", 00:38:44.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:44.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:44.730 "hdgst": ${hdgst:-false}, 00:38:44.730 "ddgst": ${ddgst:-false} 00:38:44.730 }, 00:38:44.730 "method": "bdev_nvme_attach_controller" 00:38:44.730 } 00:38:44.730 EOF 00:38:44.730 )") 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:38:44.730 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:44.730 "params": { 00:38:44.730 "name": "Nvme1", 00:38:44.730 "trtype": "tcp", 00:38:44.730 "traddr": "10.0.0.2", 00:38:44.730 "adrfam": "ipv4", 00:38:44.730 "trsvcid": "4420", 00:38:44.730 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:44.730 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:44.730 "hdgst": false, 00:38:44.730 "ddgst": false 00:38:44.730 }, 00:38:44.730 "method": "bdev_nvme_attach_controller" 00:38:44.730 }' 00:38:44.730 [2024-11-18 07:24:05.562406] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
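With the target listening on /var/tmp/spdk.sock (the socket waitforlisten polls above), zcopy.sh configures it and kicks off the first bdevperf pass. Assuming rpc_cmd forwards to scripts/rpc.py against that socket, a rough, shortened equivalent of the traced sequence is sketched below; all flags are copied from the trace, and the gen_nvmf_target_json process substitution stands in for the /dev/fd/62 JSON shown above.

    # Target-side RPC configuration, as seen in the rpc_cmd calls above.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy      # TCP transport with zero-copy enabled
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0             # malloc bdev backing the namespace
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # Initiator side: 10 s verify workload, queue depth 128, 8 KiB I/O, attaching to
    # 10.0.0.2:4420 via the bdev_nvme_attach_controller JSON printed above.
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192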
00:38:44.730 [2024-11-18 07:24:05.562512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440442 ] 00:38:44.730 [2024-11-18 07:24:05.630013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:44.730 [2024-11-18 07:24:05.675218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:44.988 Running I/O for 10 seconds... 00:38:47.303 5556.00 IOPS, 43.41 MiB/s [2024-11-18T06:24:09.291Z] 5612.00 IOPS, 43.84 MiB/s [2024-11-18T06:24:10.258Z] 5637.67 IOPS, 44.04 MiB/s [2024-11-18T06:24:11.194Z] 5642.00 IOPS, 44.08 MiB/s [2024-11-18T06:24:12.135Z] 5635.60 IOPS, 44.03 MiB/s [2024-11-18T06:24:13.080Z] 5643.00 IOPS, 44.09 MiB/s [2024-11-18T06:24:14.015Z] 5647.29 IOPS, 44.12 MiB/s [2024-11-18T06:24:15.390Z] 5653.00 IOPS, 44.16 MiB/s [2024-11-18T06:24:16.326Z] 5653.56 IOPS, 44.17 MiB/s [2024-11-18T06:24:16.326Z] 5657.10 IOPS, 44.20 MiB/s 00:38:55.348 Latency(us) 00:38:55.348 [2024-11-18T06:24:16.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:55.348 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:38:55.348 Verification LBA range: start 0x0 length 0x1000 00:38:55.348 Nvme1n1 : 10.02 5658.75 44.21 0.00 0.00 22558.40 4174.89 29903.83 00:38:55.348 [2024-11-18T06:24:16.326Z] =================================================================================================================== 00:38:55.348 [2024-11-18T06:24:16.326Z] Total : 5658.75 44.21 0.00 0.00 22558.40 4174.89 29903.83 00:38:55.348 07:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=442139 00:38:55.348 07:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:38:55.348 07:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:55.348 07:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:38:55.348 07:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:38:55.348 07:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:38:55.348 07:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:38:55.348 07:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:55.348 07:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:55.348 { 00:38:55.348 "params": { 00:38:55.348 "name": "Nvme$subsystem", 00:38:55.348 "trtype": "$TEST_TRANSPORT", 00:38:55.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:55.348 "adrfam": "ipv4", 00:38:55.348 "trsvcid": "$NVMF_PORT", 00:38:55.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:55.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:55.348 "hdgst": ${hdgst:-false}, 00:38:55.348 "ddgst": ${ddgst:-false} 00:38:55.348 }, 00:38:55.348 "method": "bdev_nvme_attach_controller" 00:38:55.348 } 00:38:55.348 EOF 00:38:55.348 )") 00:38:55.348 07:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:38:55.348 
[2024-11-18 07:24:16.210965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.348 [2024-11-18 07:24:16.211010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.348 07:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:38:55.348 07:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:38:55.348 07:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:55.348 "params": { 00:38:55.348 "name": "Nvme1", 00:38:55.348 "trtype": "tcp", 00:38:55.349 "traddr": "10.0.0.2", 00:38:55.349 "adrfam": "ipv4", 00:38:55.349 "trsvcid": "4420", 00:38:55.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:55.349 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:55.349 "hdgst": false, 00:38:55.349 "ddgst": false 00:38:55.349 }, 00:38:55.349 "method": "bdev_nvme_attach_controller" 00:38:55.349 }' 00:38:55.349 [2024-11-18 07:24:16.218867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.349 [2024-11-18 07:24:16.218889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.349 [2024-11-18 07:24:16.226849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.349 [2024-11-18 07:24:16.226870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.349 [2024-11-18 07:24:16.234849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.349 [2024-11-18 07:24:16.234869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.349 [2024-11-18 07:24:16.242866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.349 [2024-11-18 07:24:16.242887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.349 [2024-11-18 07:24:16.249545] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:38:55.349 [2024-11-18 07:24:16.249607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442139 ] 00:38:55.349 [2024-11-18 07:24:16.250868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.349 [2024-11-18 07:24:16.250888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.349 [2024-11-18 07:24:16.258875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.349 [2024-11-18 07:24:16.258895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.349 [2024-11-18 07:24:16.266857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.349 [2024-11-18 07:24:16.266877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.349 [2024-11-18 07:24:16.274848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.349 [2024-11-18 07:24:16.274868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.349 [2024-11-18 07:24:16.282866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.349 [2024-11-18 07:24:16.282887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.349 [2024-11-18 07:24:16.290864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.349 [2024-11-18 07:24:16.290883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.349 [2024-11-18 07:24:16.298865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.349 [2024-11-18 07:24:16.298885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.349 [2024-11-18 07:24:16.306864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.349 [2024-11-18 07:24:16.306884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.349 [2024-11-18 07:24:16.314864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.349 [2024-11-18 07:24:16.314884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.349 [2024-11-18 07:24:16.317299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:55.349 [2024-11-18 07:24:16.322885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.349 [2024-11-18 07:24:16.322911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.330915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.330954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.338876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.338900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.346868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.346889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:38:55.608 [2024-11-18 07:24:16.354864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.354884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.362869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.362892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.367914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:55.608 [2024-11-18 07:24:16.370864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.370884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.378864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.378884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.386905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.386942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.394907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.394947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.402914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.402954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.410915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.410955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.418915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.418957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.426913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.426953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.434872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.434894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.442893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.442923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.450914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.450952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.458906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.458943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 
07:24:16.466866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.466887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.474866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.474886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.482858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.482899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.490871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.490895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.498873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.498897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.506871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.506894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.514868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.514891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.522878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.522902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.530867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.530888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.538873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.538898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 [2024-11-18 07:24:16.546869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.608 [2024-11-18 07:24:16.546893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.608 Running I/O for 5 seconds... 
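The second bdevperf pass above runs a 5 s randrw 50/50 workload (-t 5 -q 128 -w randrw -M 50 -o 8192) while the test keeps issuing nvmf_subsystem_add_ns against the live subsystem; the repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs are those RPCs being rejected by the target, not I/O failures. A minimal sketch of that pattern, under the same rpc.py assumption as above and not the literal zcopy.sh code, since the exact loop is not visible in the trace:

    # Keep re-adding an NSID that is already attached while bdevperf has I/O in flight;
    # each attempt is expected to fail with "Requested NSID 1 already in use".
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
    perfpid=$!
    while kill -0 "$perfpid" 2> /dev/null; do
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done
    wait "$perfpid"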
00:38:55.609 [2024-11-18 07:24:16.565264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.609 [2024-11-18 07:24:16.565294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.609 [2024-11-18 07:24:16.579821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.609 [2024-11-18 07:24:16.579850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.868 [2024-11-18 07:24:16.590186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.868 [2024-11-18 07:24:16.590215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.868 [2024-11-18 07:24:16.603641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.868 [2024-11-18 07:24:16.603670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.868 [2024-11-18 07:24:16.616074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.868 [2024-11-18 07:24:16.616100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.868 [2024-11-18 07:24:16.633992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.868 [2024-11-18 07:24:16.634018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.868 [2024-11-18 07:24:16.644659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.868 [2024-11-18 07:24:16.644700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.868 [2024-11-18 07:24:16.661316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.868 [2024-11-18 07:24:16.661341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.868 [2024-11-18 07:24:16.672927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.868 [2024-11-18 07:24:16.672952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.868 [2024-11-18 07:24:16.689896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.868 [2024-11-18 07:24:16.689921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.868 [2024-11-18 07:24:16.701380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.868 [2024-11-18 07:24:16.701418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.868 [2024-11-18 07:24:16.717460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.868 [2024-11-18 07:24:16.717485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.868 [2024-11-18 07:24:16.728979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.868 [2024-11-18 07:24:16.729005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.868 [2024-11-18 07:24:16.745105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.868 [2024-11-18 07:24:16.745130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.868 [2024-11-18 07:24:16.758918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.868 
[2024-11-18 07:24:16.758945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.868 [2024-11-18 07:24:16.769387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.868 [2024-11-18 07:24:16.769426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.868 [2024-11-18 07:24:16.785697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.868 [2024-11-18 07:24:16.785724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.868 [2024-11-18 07:24:16.798365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.868 [2024-11-18 07:24:16.798392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.868 [2024-11-18 07:24:16.808749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.868 [2024-11-18 07:24:16.808791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.868 [2024-11-18 07:24:16.821647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.868 [2024-11-18 07:24:16.821675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.868 [2024-11-18 07:24:16.832887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.868 [2024-11-18 07:24:16.832912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.127 [2024-11-18 07:24:16.850249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.127 [2024-11-18 07:24:16.850274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.127 [2024-11-18 07:24:16.860600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.127 [2024-11-18 07:24:16.860627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.127 [2024-11-18 07:24:16.874121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.127 [2024-11-18 07:24:16.874146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.127 [2024-11-18 07:24:16.886590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.127 [2024-11-18 07:24:16.886616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.127 [2024-11-18 07:24:16.898876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.127 [2024-11-18 07:24:16.898906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.127 [2024-11-18 07:24:16.910667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.127 [2024-11-18 07:24:16.910695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.127 [2024-11-18 07:24:16.922682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.127 [2024-11-18 07:24:16.922709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.127 [2024-11-18 07:24:16.934912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.127 [2024-11-18 07:24:16.934936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.127 [2024-11-18 07:24:16.946814] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.127 [2024-11-18 07:24:16.946839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.127 [2024-11-18 07:24:16.959192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.127 [2024-11-18 07:24:16.959217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.127 [2024-11-18 07:24:16.970958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.127 [2024-11-18 07:24:16.970983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.127 [2024-11-18 07:24:16.983079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.127 [2024-11-18 07:24:16.983118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.127 [2024-11-18 07:24:16.995119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.127 [2024-11-18 07:24:16.995144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.127 [2024-11-18 07:24:17.007433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.127 [2024-11-18 07:24:17.007459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.127 [2024-11-18 07:24:17.019415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.127 [2024-11-18 07:24:17.019441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.127 [2024-11-18 07:24:17.031939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.127 [2024-11-18 07:24:17.031964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.127 [2024-11-18 07:24:17.048595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.127 [2024-11-18 07:24:17.048621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.127 [2024-11-18 07:24:17.059795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.127 [2024-11-18 07:24:17.059821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.127 [2024-11-18 07:24:17.073237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.127 [2024-11-18 07:24:17.073265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.127 [2024-11-18 07:24:17.085059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.127 [2024-11-18 07:24:17.085084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.127 [2024-11-18 07:24:17.097947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.127 [2024-11-18 07:24:17.097986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.386 [2024-11-18 07:24:17.110651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.386 [2024-11-18 07:24:17.110679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.386 [2024-11-18 07:24:17.122487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.386 [2024-11-18 07:24:17.122539] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.386 [2024-11-18 07:24:17.134326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.386 [2024-11-18 07:24:17.134357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.386 [2024-11-18 07:24:17.146468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.386 [2024-11-18 07:24:17.146516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.386 [2024-11-18 07:24:17.158975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.386 [2024-11-18 07:24:17.159000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.386 [2024-11-18 07:24:17.170658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.386 [2024-11-18 07:24:17.170685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.386 [2024-11-18 07:24:17.182647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.386 [2024-11-18 07:24:17.182674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.386 [2024-11-18 07:24:17.195027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.386 [2024-11-18 07:24:17.195052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.386 [2024-11-18 07:24:17.207602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.386 [2024-11-18 07:24:17.207629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.386 [2024-11-18 07:24:17.224386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.386 [2024-11-18 07:24:17.224413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.386 [2024-11-18 07:24:17.235592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.386 [2024-11-18 07:24:17.235620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.386 [2024-11-18 07:24:17.248703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.386 [2024-11-18 07:24:17.248731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.386 [2024-11-18 07:24:17.265974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.386 [2024-11-18 07:24:17.265999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.386 [2024-11-18 07:24:17.276258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.386 [2024-11-18 07:24:17.276285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.386 [2024-11-18 07:24:17.289250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.386 [2024-11-18 07:24:17.289274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.386 [2024-11-18 07:24:17.301610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.386 [2024-11-18 07:24:17.301637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.386 [2024-11-18 07:24:17.315514] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.386 [2024-11-18 07:24:17.315542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.386 [2024-11-18 07:24:17.326562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.386 [2024-11-18 07:24:17.326602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.386 [2024-11-18 07:24:17.339723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.386 [2024-11-18 07:24:17.339751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.386 [2024-11-18 07:24:17.357536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.386 [2024-11-18 07:24:17.357564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.645 [2024-11-18 07:24:17.373371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.645 [2024-11-18 07:24:17.373399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.645 [2024-11-18 07:24:17.386281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.645 [2024-11-18 07:24:17.386316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.645 [2024-11-18 07:24:17.397129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.645 [2024-11-18 07:24:17.397155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.645 [2024-11-18 07:24:17.413700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.645 [2024-11-18 07:24:17.413727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.645 [2024-11-18 07:24:17.425084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.645 [2024-11-18 07:24:17.425109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.645 [2024-11-18 07:24:17.441614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.645 [2024-11-18 07:24:17.441640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.645 [2024-11-18 07:24:17.452285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.645 [2024-11-18 07:24:17.452310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.645 [2024-11-18 07:24:17.465365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.645 [2024-11-18 07:24:17.465390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.645 [2024-11-18 07:24:17.477475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.645 [2024-11-18 07:24:17.477523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.645 [2024-11-18 07:24:17.490178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.645 [2024-11-18 07:24:17.490202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.645 [2024-11-18 07:24:17.502321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.645 [2024-11-18 07:24:17.502346] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:38:56.645 [2024-11-18 07:24:17.514592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:38:56.645 [2024-11-18 07:24:17.514633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeats for every namespace-add attempt from 07:24:17.526359 through 07:24:21.501570 ...]
00:38:56.645 10394.00 IOPS, 81.20 MiB/s [2024-11-18T06:24:17.623Z]
00:38:57.681 10463.00 IOPS, 81.74 MiB/s [2024-11-18T06:24:18.659Z]
00:38:58.716 10468.67 IOPS, 81.79 MiB/s [2024-11-18T06:24:19.694Z]
00:38:59.752 10471.00 IOPS, 81.80 MiB/s [2024-11-18T06:24:20.730Z]
00:39:00.789 [2024-11-18 07:24:21.512918]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.789 [2024-11-18 07:24:21.512943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.789 [2024-11-18 07:24:21.529932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.789 [2024-11-18 07:24:21.529958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.789 [2024-11-18 07:24:21.540523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.789 [2024-11-18 07:24:21.540549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.789 [2024-11-18 07:24:21.553692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.789 [2024-11-18 07:24:21.553719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.789 10468.20 IOPS, 81.78 MiB/s [2024-11-18T06:24:21.767Z] [2024-11-18 07:24:21.565899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.789 [2024-11-18 07:24:21.565925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.789 [2024-11-18 07:24:21.575002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.789 [2024-11-18 07:24:21.575029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.789
00:39:00.789 Latency(us)
00:39:00.789 [2024-11-18T06:24:21.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:00.789 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:39:00.789 Nvme1n1 : 5.01 10470.07 81.80 0.00 0.00 12209.34 3155.44 19806.44
00:39:00.789 [2024-11-18T06:24:21.767Z] ===================================================================================================================
00:39:00.789 [2024-11-18T06:24:21.767Z] Total : 10470.07 81.80 0.00 0.00 12209.34 3155.44 19806.44
00:39:00.789 [2024-11-18 07:24:21.582887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.789 [2024-11-18 07:24:21.582912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.789 [2024-11-18 07:24:21.590870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.789 [2024-11-18 07:24:21.590893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.789 [2024-11-18 07:24:21.598920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.789 [2024-11-18 07:24:21.598972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.789 [2024-11-18 07:24:21.606915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.789 [2024-11-18 07:24:21.606966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.789 [2024-11-18 07:24:21.614920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.789 [2024-11-18 07:24:21.614971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.789 [2024-11-18 07:24:21.622918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.789 [2024-11-18 07:24:21.622964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.789 [2024-11-18
07:24:21.630914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.789 [2024-11-18 07:24:21.630963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.789 [2024-11-18 07:24:21.638917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.789 [2024-11-18 07:24:21.638964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.789 [2024-11-18 07:24:21.646918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.789 [2024-11-18 07:24:21.646967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.789 [2024-11-18 07:24:21.654909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.789 [2024-11-18 07:24:21.654957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.789 [2024-11-18 07:24:21.662915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.789 [2024-11-18 07:24:21.662962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.789 [2024-11-18 07:24:21.670918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.789 [2024-11-18 07:24:21.670966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.789 [2024-11-18 07:24:21.678916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.789 [2024-11-18 07:24:21.678963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.789 [2024-11-18 07:24:21.686914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.789 [2024-11-18 07:24:21.686960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.789 [2024-11-18 07:24:21.694915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.789 [2024-11-18 07:24:21.694961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.789 [2024-11-18 07:24:21.702906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.789 [2024-11-18 07:24:21.702946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.789 [2024-11-18 07:24:21.710881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.789 [2024-11-18 07:24:21.710913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.789 [2024-11-18 07:24:21.718902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.789 [2024-11-18 07:24:21.718947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.789 [2024-11-18 07:24:21.726915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.789 [2024-11-18 07:24:21.726962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.789 [2024-11-18 07:24:21.734916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.789 [2024-11-18 07:24:21.734951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.789 [2024-11-18 07:24:21.742882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.789 [2024-11-18 07:24:21.742902] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.790 [2024-11-18 07:24:21.750879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.790 [2024-11-18 07:24:21.750898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (442139) - No such process 00:39:00.790 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 442139 00:39:00.790 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:00.790 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.790 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:00.790 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.790 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:00.790 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.790 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:01.048 delay0 00:39:01.048 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.048 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:39:01.048 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.048 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:01.048 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.048 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:39:01.048 [2024-11-18 07:24:21.907638] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:09.158 Initializing NVMe Controllers 00:39:09.158 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:09.158 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:09.158 Initialization complete. Launching workers. 
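Stripped of the shell tracing, the abort phase captured above comes down to three RPCs plus the abort example. A minimal sketch, assuming the test's rpc_cmd wrapper is replaced with scripts/rpc.py against the default RPC socket and that the target from this trace is still listening on 10.0.0.2:4420:

  # Drop NSID 1 (the namespace the add_ns loop above kept colliding with).
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # Wrap malloc0 in a delay bdev so I/O stays in flight long enough to be aborted.
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # Re-export the delayed bdev as NSID 1 on the same subsystem.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Drive a 5-second random read/write load over NVMe/TCP and submit aborts against it.
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The NS/CTRLR abort counters reported just below are what this phase is judged on.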
00:39:09.158 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 228, failed: 26683 00:39:09.158 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 26766, failed to submit 145 00:39:09.158 success 26703, unsuccessful 63, failed 0 00:39:09.158 07:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:39:09.158 07:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:39:09.158 07:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:09.158 07:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:39:09.158 07:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:09.158 07:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:39:09.158 07:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:09.158 07:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:09.158 rmmod nvme_tcp 00:39:09.158 rmmod nvme_fabrics 00:39:09.158 rmmod nvme_keyring 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 440383 ']' 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 440383 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 440383 ']' 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 440383 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 440383 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 440383' 00:39:09.158 killing process with pid 440383 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 440383 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 440383 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:09.158 07:24:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:09.158 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:10.537 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:10.537 00:39:10.537 real 0m28.735s 00:39:10.537 user 0m40.143s 00:39:10.537 sys 0m10.182s 00:39:10.537 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:10.537 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:10.537 ************************************ 00:39:10.537 END TEST nvmf_zcopy 00:39:10.537 ************************************ 00:39:10.537 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:10.537 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:10.537 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:10.537 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:10.537 ************************************ 00:39:10.537 START TEST nvmf_nmic 00:39:10.537 ************************************ 00:39:10.537 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:10.537 * Looking for test storage... 
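Before nvmf_nmic gets going, the nvmftestfini teardown traced above reduces to a few visible steps; a rough sketch of them is below (the namespace removal itself happens inside _remove_spdk_ns, which runs with xtrace disabled, and the PID is the one from this particular run):

  # Unload the kernel NVMe/TCP initiator stack pulled in for the test.
  modprobe -v -r nvme-tcp
  # Stop the nvmf_tgt instance started for the zcopy run (440383 in this log).
  kill 440383
  # Restore iptables, dropping only the rules tagged SPDK_NVMF.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # Flush the initiator-side addresses; the target-side netns goes away in _remove_spdk_ns.
  ip -4 addr flush cvl_0_1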
00:39:10.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:10.537 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:10.537 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:39:10.537 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:10.798 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:10.798 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:10.798 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:10.798 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:10.798 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:39:10.798 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:39:10.798 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:39:10.798 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:39:10.798 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:39:10.798 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:39:10.798 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:39:10.798 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:10.798 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:39:10.798 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:39:10.798 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:10.798 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:10.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.799 --rc genhtml_branch_coverage=1 00:39:10.799 --rc genhtml_function_coverage=1 00:39:10.799 --rc genhtml_legend=1 00:39:10.799 --rc geninfo_all_blocks=1 00:39:10.799 --rc geninfo_unexecuted_blocks=1 00:39:10.799 00:39:10.799 ' 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:10.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.799 --rc genhtml_branch_coverage=1 00:39:10.799 --rc genhtml_function_coverage=1 00:39:10.799 --rc genhtml_legend=1 00:39:10.799 --rc geninfo_all_blocks=1 00:39:10.799 --rc geninfo_unexecuted_blocks=1 00:39:10.799 00:39:10.799 ' 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:10.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.799 --rc genhtml_branch_coverage=1 00:39:10.799 --rc genhtml_function_coverage=1 00:39:10.799 --rc genhtml_legend=1 00:39:10.799 --rc geninfo_all_blocks=1 00:39:10.799 --rc geninfo_unexecuted_blocks=1 00:39:10.799 00:39:10.799 ' 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:10.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.799 --rc genhtml_branch_coverage=1 00:39:10.799 --rc genhtml_function_coverage=1 00:39:10.799 --rc genhtml_legend=1 00:39:10.799 --rc geninfo_all_blocks=1 00:39:10.799 --rc geninfo_unexecuted_blocks=1 00:39:10.799 00:39:10.799 ' 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:10.799 07:24:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:10.799 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:39:10.800 07:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:12.706 07:24:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:12.706 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:12.706 07:24:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:12.706 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:12.706 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:12.706 
07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:12.706 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:12.706 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:12.707 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:12.707 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:12.707 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:12.707 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:12.707 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:12.707 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:12.707 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
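What nvmf_tcp_init is doing in the trace above (and just below) is splitting the two cvl ports of the NIC between the default namespace (initiator side) and a dedicated namespace (target side), so the NVMe/TCP traffic crosses a real link on a single host. A condensed sketch using the interface names and addresses from this run:

  # Clean addresses, then move the target-side port into its own netns.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Initiator keeps 10.0.0.1; the target port gets 10.0.0.2 inside the namespace.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  # Bring both ends (and the namespace loopback) up, then verify with the pings traced below.
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2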
00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:12.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:12.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:39:12.966 00:39:12.966 --- 10.0.0.2 ping statistics --- 00:39:12.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:12.966 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:12.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:12.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:39:12.966 00:39:12.966 --- 10.0.0.1 ping statistics --- 00:39:12.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:12.966 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=445631 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 445631 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 445631 ']' 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:12.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:12.966 07:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:12.966 [2024-11-18 07:24:33.896935] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:12.966 [2024-11-18 07:24:33.898010] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:39:12.966 [2024-11-18 07:24:33.898085] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:13.226 [2024-11-18 07:24:33.972822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:13.226 [2024-11-18 07:24:34.017774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:13.226 [2024-11-18 07:24:34.017851] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:13.226 [2024-11-18 07:24:34.017875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:13.226 [2024-11-18 07:24:34.017886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:13.226 [2024-11-18 07:24:34.017896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:13.226 [2024-11-18 07:24:34.019453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:13.226 [2024-11-18 07:24:34.019529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:13.226 [2024-11-18 07:24:34.019597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:13.226 [2024-11-18 07:24:34.019600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.226 [2024-11-18 07:24:34.099312] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:13.226 [2024-11-18 07:24:34.099514] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:13.226 [2024-11-18 07:24:34.099813] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
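The interrupt-mode notices around this point come from how the nmic target is launched: nvmf_tgt runs inside the target namespace with --interrupt-mode, so the reactors wait on event file descriptors instead of busy-polling. A minimal sketch of that launch, using the binary path, core mask and flags from this trace (waitforlisten is the autotest helper that blocks until the RPC socket is up):

  # Start the target on 4 cores (-m 0xF) in interrupt mode, inside the target netns.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # Wait for the RPC socket before issuing the transport/subsystem RPCs traced below.
  waitforlisten "$nvmfpid"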
00:39:13.226 [2024-11-18 07:24:34.100411] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:13.226 [2024-11-18 07:24:34.100652] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:13.226 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:13.226 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:39:13.226 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:13.226 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:13.226 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:13.226 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:13.226 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:13.226 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.226 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:13.226 [2024-11-18 07:24:34.156312] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:13.226 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.226 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:13.226 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.226 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:13.485 Malloc0 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:13.485 
07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:13.485 [2024-11-18 07:24:34.232626] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:39:13.485 test case1: single bdev can't be used in multiple subsystems 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.485 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:13.485 [2024-11-18 07:24:34.256246] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:39:13.485 [2024-11-18 07:24:34.256286] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:39:13.485 [2024-11-18 07:24:34.256307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.485 request: 00:39:13.485 { 00:39:13.485 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:39:13.485 "namespace": { 00:39:13.485 "bdev_name": "Malloc0", 00:39:13.485 "no_auto_visible": false 00:39:13.485 }, 00:39:13.486 "method": "nvmf_subsystem_add_ns", 00:39:13.486 "req_id": 1 00:39:13.486 } 00:39:13.486 Got JSON-RPC error response 00:39:13.486 response: 00:39:13.486 { 00:39:13.486 "code": -32602, 00:39:13.486 "message": "Invalid parameters" 00:39:13.486 } 00:39:13.486 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:13.486 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:39:13.486 07:24:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:39:13.486 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:39:13.486 Adding namespace failed - expected result. 00:39:13.486 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:39:13.486 test case2: host connect to nvmf target in multiple paths 00:39:13.486 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:13.486 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.486 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:13.486 [2024-11-18 07:24:34.264358] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:13.486 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.486 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:13.745 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:39:13.745 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:39:13.745 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:39:13.745 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:13.745 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:13.745 07:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:39:16.275 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:16.275 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:16.275 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:16.275 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:16.275 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:16.275 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:39:16.275 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:16.275 [global] 00:39:16.275 thread=1 00:39:16.275 invalidate=1 
00:39:16.275 rw=write 00:39:16.275 time_based=1 00:39:16.275 runtime=1 00:39:16.275 ioengine=libaio 00:39:16.275 direct=1 00:39:16.275 bs=4096 00:39:16.275 iodepth=1 00:39:16.275 norandommap=0 00:39:16.275 numjobs=1 00:39:16.275 00:39:16.275 verify_dump=1 00:39:16.275 verify_backlog=512 00:39:16.275 verify_state_save=0 00:39:16.275 do_verify=1 00:39:16.275 verify=crc32c-intel 00:39:16.275 [job0] 00:39:16.275 filename=/dev/nvme0n1 00:39:16.275 Could not set queue depth (nvme0n1) 00:39:16.275 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:16.275 fio-3.35 00:39:16.275 Starting 1 thread 00:39:17.210 00:39:17.210 job0: (groupid=0, jobs=1): err= 0: pid=446014: Mon Nov 18 07:24:38 2024 00:39:17.210 read: IOPS=20, BW=81.6KiB/s (83.5kB/s)(84.0KiB/1030msec) 00:39:17.210 slat (nsec): min=6440, max=16944, avg=13442.10, stdev=1979.53 00:39:17.210 clat (usec): min=40528, max=42029, avg=41913.86, stdev=319.86 00:39:17.210 lat (usec): min=40534, max=42042, avg=41927.30, stdev=321.43 00:39:17.210 clat percentiles (usec): 00:39:17.210 | 1.00th=[40633], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:39:17.210 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:39:17.210 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:17.210 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:17.210 | 99.99th=[42206] 00:39:17.210 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:39:17.210 slat (usec): min=5, max=32835, avg=71.84, stdev=1450.81 00:39:17.210 clat (usec): min=136, max=704, avg=217.18, stdev=63.56 00:39:17.210 lat (usec): min=143, max=33122, avg=289.01, stdev=1455.29 00:39:17.210 clat percentiles (usec): 00:39:17.210 | 1.00th=[ 139], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:39:17.210 | 30.00th=[ 157], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 247], 00:39:17.210 | 70.00th=[ 247], 80.00th=[ 247], 90.00th=[ 249], 95.00th=[ 269], 00:39:17.210 | 99.00th=[ 404], 99.50th=[ 545], 99.90th=[ 701], 99.95th=[ 701], 00:39:17.210 | 99.99th=[ 701] 00:39:17.210 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:39:17.210 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:17.210 lat (usec) : 250=87.43%, 500=8.07%, 750=0.56% 00:39:17.210 lat (msec) : 50=3.94% 00:39:17.210 cpu : usr=0.19%, sys=0.39%, ctx=538, majf=0, minf=1 00:39:17.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:17.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:17.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:17.210 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:17.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:17.211 00:39:17.211 Run status group 0 (all jobs): 00:39:17.211 READ: bw=81.6KiB/s (83.5kB/s), 81.6KiB/s-81.6KiB/s (83.5kB/s-83.5kB/s), io=84.0KiB (86.0kB), run=1030-1030msec 00:39:17.211 WRITE: bw=1988KiB/s (2036kB/s), 1988KiB/s-1988KiB/s (2036kB/s-2036kB/s), io=2048KiB (2097kB), run=1030-1030msec 00:39:17.211 00:39:17.211 Disk stats (read/write): 00:39:17.211 nvme0n1: ios=70/512, merge=0/0, ticks=922/106, in_queue=1028, util=98.90% 00:39:17.211 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:17.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:39:17.469 07:24:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:17.469 rmmod nvme_tcp 00:39:17.469 rmmod nvme_fabrics 00:39:17.469 rmmod nvme_keyring 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 445631 ']' 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 445631 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 445631 ']' 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 445631 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:17.469 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 445631 00:39:17.470 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:17.470 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:17.470 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 445631' 00:39:17.470 killing process with pid 445631 00:39:17.470 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 445631 00:39:17.470 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 445631 00:39:17.731 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:17.731 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:17.731 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:17.731 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:39:17.731 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:39:17.731 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:17.731 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:39:17.731 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:17.731 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:17.731 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:17.731 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:17.731 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:20.277 00:39:20.277 real 0m9.282s 00:39:20.277 user 0m17.501s 00:39:20.277 sys 0m3.362s 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:20.277 ************************************ 00:39:20.277 END TEST nvmf_nmic 00:39:20.277 ************************************ 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:20.277 ************************************ 00:39:20.277 START TEST nvmf_fio_target 00:39:20.277 ************************************ 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:20.277 * Looking for test storage... 
00:39:20.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:20.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:20.277 --rc genhtml_branch_coverage=1 00:39:20.277 --rc genhtml_function_coverage=1 00:39:20.277 --rc genhtml_legend=1 00:39:20.277 --rc geninfo_all_blocks=1 00:39:20.277 --rc geninfo_unexecuted_blocks=1 00:39:20.277 00:39:20.277 ' 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:20.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:20.277 --rc genhtml_branch_coverage=1 00:39:20.277 --rc genhtml_function_coverage=1 00:39:20.277 --rc genhtml_legend=1 00:39:20.277 --rc geninfo_all_blocks=1 00:39:20.277 --rc geninfo_unexecuted_blocks=1 00:39:20.277 00:39:20.277 ' 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:20.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:20.277 --rc genhtml_branch_coverage=1 00:39:20.277 --rc genhtml_function_coverage=1 00:39:20.277 --rc genhtml_legend=1 00:39:20.277 --rc geninfo_all_blocks=1 00:39:20.277 --rc geninfo_unexecuted_blocks=1 00:39:20.277 00:39:20.277 ' 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:20.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:20.277 --rc genhtml_branch_coverage=1 00:39:20.277 --rc genhtml_function_coverage=1 00:39:20.277 --rc genhtml_legend=1 00:39:20.277 --rc geninfo_all_blocks=1 00:39:20.277 --rc geninfo_unexecuted_blocks=1 00:39:20.277 
00:39:20.277 ' 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:20.277 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:39:20.278 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:22.184 07:24:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:22.184 07:24:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:22.184 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:22.184 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:22.185 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:22.185 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:22.185 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:22.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:22.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:39:22.185 00:39:22.185 --- 10.0.0.2 ping statistics --- 00:39:22.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:22.185 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:39:22.185 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:22.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:22.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:39:22.444 00:39:22.444 --- 10.0.0.1 ping statistics --- 00:39:22.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:22.444 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:39:22.444 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:22.444 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:39:22.444 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:22.444 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:22.444 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:22.444 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:22.444 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:22.444 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:22.444 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:22.444 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:39:22.444 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:22.444 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:22.444 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:22.444 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=448205 00:39:22.444 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:22.444 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 448205 00:39:22.444 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 448205 ']' 00:39:22.444 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:22.444 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:22.444 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:22.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
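The nvmf_tcp_init steps traced a few lines above (common.sh@250 onward) amount to a short ip/iptables sequence; restated in plain script form, with the interface names, addresses and namespace taken verbatim from this run (address flushes and cleanup steps omitted):

    # The target-side port moves into its own namespace; the initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open TCP/4420 and tag the rule so the later iptables-save | grep -v SPDK_NVMF cleanup drops it.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Verify reachability in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1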
00:39:22.444 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:22.444 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:22.444 [2024-11-18 07:24:43.248119] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:22.444 [2024-11-18 07:24:43.249167] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:39:22.444 [2024-11-18 07:24:43.249228] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:22.444 [2024-11-18 07:24:43.320859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:22.444 [2024-11-18 07:24:43.365137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:22.444 [2024-11-18 07:24:43.365194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:22.444 [2024-11-18 07:24:43.365215] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:22.444 [2024-11-18 07:24:43.365226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:22.444 [2024-11-18 07:24:43.365235] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:22.444 [2024-11-18 07:24:43.366629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:22.444 [2024-11-18 07:24:43.366693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:22.444 [2024-11-18 07:24:43.366762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:22.444 [2024-11-18 07:24:43.366764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:22.704 [2024-11-18 07:24:43.448985] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:22.704 [2024-11-18 07:24:43.449213] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:22.704 [2024-11-18 07:24:43.449545] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:22.704 [2024-11-18 07:24:43.450161] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:22.704 [2024-11-18 07:24:43.450364] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
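With the four reactors up and every poll-group thread switched to interrupt mode, the target sits idle until fio.sh starts issuing RPCs. If one wanted to double-check that state by hand, the generic framework/thread RPCs are a reasonable probe (a suggestion, not something this run does; the comments describe typical output and field details can vary between SPDK versions):

    # Ask the running target which reactors exist and how their threads are scheduled.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock framework_get_reactors
    #   one entry per core in -m 0xF, listing the lightweight threads placed on it
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock thread_get_stats
    #   busy/idle tick counters for app_thread and the nvmf_tgt_poll_group_* threads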
00:39:22.704 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:22.704 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:39:22.704 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:22.704 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:22.704 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:22.704 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:22.704 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:22.962 [2024-11-18 07:24:43.743454] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:22.962 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:23.220 07:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:39:23.220 07:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:23.478 07:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:39:23.478 07:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:23.736 07:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:39:23.736 07:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:23.994 07:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:39:23.994 07:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:39:24.560 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:24.818 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:39:24.818 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:25.076 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:39:25.076 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:25.334 07:24:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:39:25.334 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:39:25.591 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:25.849 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:25.849 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:26.106 07:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:26.106 07:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:26.364 07:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:26.622 [2024-11-18 07:24:47.523647] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:26.622 07:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:39:26.880 07:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:39:27.138 07:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:27.397 07:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:39:27.397 07:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:39:27.397 07:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:27.397 07:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:39:27.397 07:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:39:27.397 07:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:39:29.296 07:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:29.296 07:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:39:29.296 07:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:29.296 07:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:39:29.296 07:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:29.296 07:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:39:29.296 07:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:29.296 [global] 00:39:29.296 thread=1 00:39:29.296 invalidate=1 00:39:29.296 rw=write 00:39:29.296 time_based=1 00:39:29.296 runtime=1 00:39:29.296 ioengine=libaio 00:39:29.296 direct=1 00:39:29.296 bs=4096 00:39:29.296 iodepth=1 00:39:29.296 norandommap=0 00:39:29.296 numjobs=1 00:39:29.296 00:39:29.296 verify_dump=1 00:39:29.296 verify_backlog=512 00:39:29.296 verify_state_save=0 00:39:29.296 do_verify=1 00:39:29.296 verify=crc32c-intel 00:39:29.296 [job0] 00:39:29.296 filename=/dev/nvme0n1 00:39:29.554 [job1] 00:39:29.554 filename=/dev/nvme0n2 00:39:29.555 [job2] 00:39:29.555 filename=/dev/nvme0n3 00:39:29.555 [job3] 00:39:29.555 filename=/dev/nvme0n4 00:39:29.555 Could not set queue depth (nvme0n1) 00:39:29.555 Could not set queue depth (nvme0n2) 00:39:29.555 Could not set queue depth (nvme0n3) 00:39:29.555 Could not set queue depth (nvme0n4) 00:39:29.555 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:29.555 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:29.555 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:29.555 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:29.555 fio-3.35 00:39:29.555 Starting 4 threads 00:39:30.929 00:39:30.929 job0: (groupid=0, jobs=1): err= 0: pid=449150: Mon Nov 18 07:24:51 2024 00:39:30.929 read: IOPS=21, BW=85.6KiB/s (87.7kB/s)(88.0KiB/1028msec) 00:39:30.929 slat (nsec): min=12797, max=15729, avg=14764.27, stdev=832.49 00:39:30.929 clat (usec): min=40863, max=41062, avg=40973.92, stdev=42.07 00:39:30.929 lat (usec): min=40877, max=41077, avg=40988.69, stdev=42.13 00:39:30.929 clat percentiles (usec): 00:39:30.929 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:30.929 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:30.929 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:30.929 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:30.929 | 99.99th=[41157] 00:39:30.929 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:39:30.929 slat (nsec): min=6697, max=60280, avg=16401.72, stdev=7706.86 00:39:30.929 clat (usec): min=186, max=403, avg=226.52, stdev=25.20 00:39:30.929 lat (usec): min=197, max=418, avg=242.92, stdev=26.51 00:39:30.929 clat percentiles (usec): 00:39:30.929 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 208], 00:39:30.929 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 225], 00:39:30.929 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 255], 95.00th=[ 269], 00:39:30.929 | 
99.00th=[ 334], 99.50th=[ 351], 99.90th=[ 404], 99.95th=[ 404], 00:39:30.929 | 99.99th=[ 404] 00:39:30.929 bw ( KiB/s): min= 4096, max= 4096, per=51.40%, avg=4096.00, stdev= 0.00, samples=1 00:39:30.929 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:30.929 lat (usec) : 250=84.64%, 500=11.24% 00:39:30.929 lat (msec) : 50=4.12% 00:39:30.929 cpu : usr=0.19%, sys=0.97%, ctx=534, majf=0, minf=1 00:39:30.929 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:30.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:30.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:30.929 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:30.929 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:30.930 job1: (groupid=0, jobs=1): err= 0: pid=449151: Mon Nov 18 07:24:51 2024 00:39:30.930 read: IOPS=447, BW=1792KiB/s (1835kB/s)(1824KiB/1018msec) 00:39:30.930 slat (nsec): min=5624, max=63058, avg=14478.05, stdev=7498.28 00:39:30.930 clat (usec): min=195, max=41085, avg=1967.27, stdev=8144.17 00:39:30.930 lat (usec): min=205, max=41101, avg=1981.75, stdev=8144.31 00:39:30.930 clat percentiles (usec): 00:39:30.930 | 1.00th=[ 200], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 210], 00:39:30.930 | 30.00th=[ 217], 40.00th=[ 241], 50.00th=[ 273], 60.00th=[ 285], 00:39:30.930 | 70.00th=[ 297], 80.00th=[ 314], 90.00th=[ 429], 95.00th=[ 482], 00:39:30.930 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:30.930 | 99.99th=[41157] 00:39:30.930 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:39:30.930 slat (nsec): min=8604, max=54304, avg=17714.09, stdev=7126.75 00:39:30.930 clat (usec): min=151, max=261, avg=196.73, stdev=18.33 00:39:30.930 lat (usec): min=161, max=287, avg=214.44, stdev=21.35 00:39:30.930 clat percentiles (usec): 00:39:30.930 | 1.00th=[ 157], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 182], 00:39:30.930 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 200], 00:39:30.930 | 70.00th=[ 206], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 229], 00:39:30.930 | 99.00th=[ 239], 99.50th=[ 243], 99.90th=[ 262], 99.95th=[ 262], 00:39:30.930 | 99.99th=[ 262] 00:39:30.930 bw ( KiB/s): min= 4096, max= 4096, per=51.40%, avg=4096.00, stdev= 0.00, samples=1 00:39:30.930 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:30.930 lat (usec) : 250=72.83%, 500=25.21% 00:39:30.930 lat (msec) : 50=1.96% 00:39:30.930 cpu : usr=0.49%, sys=2.16%, ctx=969, majf=0, minf=1 00:39:30.930 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:30.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:30.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:30.930 issued rwts: total=456,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:30.930 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:30.930 job2: (groupid=0, jobs=1): err= 0: pid=449152: Mon Nov 18 07:24:51 2024 00:39:30.930 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:39:30.930 slat (nsec): min=12688, max=18948, avg=15361.95, stdev=1320.66 00:39:30.930 clat (usec): min=40802, max=41041, avg=40974.58, stdev=46.64 00:39:30.930 lat (usec): min=40818, max=41057, avg=40989.94, stdev=46.65 00:39:30.930 clat percentiles (usec): 00:39:30.930 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:30.930 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 
60.00th=[41157], 00:39:30.930 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:30.930 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:30.930 | 99.99th=[41157] 00:39:30.930 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:39:30.930 slat (nsec): min=6776, max=40390, avg=13743.65, stdev=5800.31 00:39:30.930 clat (usec): min=157, max=315, avg=190.76, stdev=16.19 00:39:30.930 lat (usec): min=166, max=342, avg=204.50, stdev=17.69 00:39:30.930 clat percentiles (usec): 00:39:30.930 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:39:30.930 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:39:30.930 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 219], 00:39:30.930 | 99.00th=[ 227], 99.50th=[ 235], 99.90th=[ 314], 99.95th=[ 314], 00:39:30.930 | 99.99th=[ 314] 00:39:30.930 bw ( KiB/s): min= 4096, max= 4096, per=51.40%, avg=4096.00, stdev= 0.00, samples=1 00:39:30.930 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:30.930 lat (usec) : 250=95.69%, 500=0.19% 00:39:30.930 lat (msec) : 50=4.12% 00:39:30.930 cpu : usr=0.20%, sys=0.79%, ctx=534, majf=0, minf=2 00:39:30.930 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:30.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:30.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:30.930 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:30.930 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:30.930 job3: (groupid=0, jobs=1): err= 0: pid=449153: Mon Nov 18 07:24:51 2024 00:39:30.930 read: IOPS=20, BW=83.7KiB/s (85.7kB/s)(84.0KiB/1004msec) 00:39:30.930 slat (nsec): min=9899, max=19746, avg=15563.48, stdev=1844.80 00:39:30.930 clat (usec): min=40935, max=42077, avg=41792.28, stdev=407.68 00:39:30.930 lat (usec): min=40950, max=42094, avg=41807.84, stdev=408.19 00:39:30.930 clat percentiles (usec): 00:39:30.930 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:39:30.930 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:39:30.930 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:30.930 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:30.930 | 99.99th=[42206] 00:39:30.930 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:39:30.930 slat (nsec): min=7605, max=48712, avg=20414.50, stdev=8714.95 00:39:30.930 clat (usec): min=190, max=370, avg=221.88, stdev=21.98 00:39:30.930 lat (usec): min=199, max=398, avg=242.29, stdev=24.27 00:39:30.930 clat percentiles (usec): 00:39:30.930 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 208], 00:39:30.930 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 221], 00:39:30.930 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 237], 95.00th=[ 251], 00:39:30.930 | 99.00th=[ 330], 99.50th=[ 347], 99.90th=[ 371], 99.95th=[ 371], 00:39:30.930 | 99.99th=[ 371] 00:39:30.930 bw ( KiB/s): min= 4096, max= 4096, per=51.40%, avg=4096.00, stdev= 0.00, samples=1 00:39:30.930 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:30.930 lat (usec) : 250=91.18%, 500=4.88% 00:39:30.930 lat (msec) : 50=3.94% 00:39:30.930 cpu : usr=0.40%, sys=1.00%, ctx=534, majf=0, minf=1 00:39:30.930 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:30.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:39:30.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:30.930 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:30.930 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:30.930 00:39:30.930 Run status group 0 (all jobs): 00:39:30.930 READ: bw=2027KiB/s (2076kB/s), 83.7KiB/s-1792KiB/s (85.7kB/s-1835kB/s), io=2084KiB (2134kB), run=1004-1028msec 00:39:30.930 WRITE: bw=7969KiB/s (8160kB/s), 1992KiB/s-2040KiB/s (2040kB/s-2089kB/s), io=8192KiB (8389kB), run=1004-1028msec 00:39:30.930 00:39:30.930 Disk stats (read/write): 00:39:30.930 nvme0n1: ios=67/512, merge=0/0, ticks=721/116, in_queue=837, util=86.77% 00:39:30.930 nvme0n2: ios=471/512, merge=0/0, ticks=710/96, in_queue=806, util=86.85% 00:39:30.930 nvme0n3: ios=18/512, merge=0/0, ticks=738/97, in_queue=835, util=88.99% 00:39:30.930 nvme0n4: ios=41/512, merge=0/0, ticks=1694/105, in_queue=1799, util=98.42% 00:39:30.930 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:39:30.930 [global] 00:39:30.930 thread=1 00:39:30.930 invalidate=1 00:39:30.930 rw=randwrite 00:39:30.930 time_based=1 00:39:30.930 runtime=1 00:39:30.930 ioengine=libaio 00:39:30.930 direct=1 00:39:30.930 bs=4096 00:39:30.930 iodepth=1 00:39:30.930 norandommap=0 00:39:30.930 numjobs=1 00:39:30.930 00:39:30.930 verify_dump=1 00:39:30.930 verify_backlog=512 00:39:30.930 verify_state_save=0 00:39:30.930 do_verify=1 00:39:30.930 verify=crc32c-intel 00:39:30.930 [job0] 00:39:30.930 filename=/dev/nvme0n1 00:39:30.930 [job1] 00:39:30.930 filename=/dev/nvme0n2 00:39:30.930 [job2] 00:39:30.930 filename=/dev/nvme0n3 00:39:30.930 [job3] 00:39:30.930 filename=/dev/nvme0n4 00:39:30.930 Could not set queue depth (nvme0n1) 00:39:30.930 Could not set queue depth (nvme0n2) 00:39:30.930 Could not set queue depth (nvme0n3) 00:39:30.930 Could not set queue depth (nvme0n4) 00:39:31.189 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:31.189 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:31.189 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:31.189 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:31.189 fio-3.35 00:39:31.189 Starting 4 threads 00:39:32.564 00:39:32.564 job0: (groupid=0, jobs=1): err= 0: pid=449384: Mon Nov 18 07:24:53 2024 00:39:32.564 read: IOPS=1729, BW=6917KiB/s (7083kB/s)(6924KiB/1001msec) 00:39:32.564 slat (nsec): min=4779, max=60812, avg=17104.79, stdev=8714.66 00:39:32.564 clat (usec): min=209, max=41812, avg=297.67, stdev=999.20 00:39:32.564 lat (usec): min=216, max=41845, avg=314.77, stdev=999.77 00:39:32.564 clat percentiles (usec): 00:39:32.565 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 245], 00:39:32.565 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 277], 00:39:32.565 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 330], 95.00th=[ 343], 00:39:32.565 | 99.00th=[ 478], 99.50th=[ 490], 99.90th=[ 519], 99.95th=[41681], 00:39:32.565 | 99.99th=[41681] 00:39:32.565 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:39:32.565 slat (nsec): min=5493, max=49163, avg=13893.05, stdev=4624.53 00:39:32.565 clat (usec): min=148, max=603, 
avg=199.96, stdev=37.68 00:39:32.565 lat (usec): min=163, max=617, avg=213.86, stdev=37.18 00:39:32.565 clat percentiles (usec): 00:39:32.565 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 174], 00:39:32.565 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 190], 00:39:32.565 | 70.00th=[ 210], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 251], 00:39:32.565 | 99.00th=[ 338], 99.50th=[ 367], 99.90th=[ 494], 99.95th=[ 594], 00:39:32.565 | 99.99th=[ 603] 00:39:32.565 bw ( KiB/s): min= 8192, max= 8192, per=35.31%, avg=8192.00, stdev= 0.00, samples=1 00:39:32.565 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:32.565 lat (usec) : 250=65.94%, 500=33.92%, 750=0.11% 00:39:32.565 lat (msec) : 50=0.03% 00:39:32.565 cpu : usr=3.30%, sys=6.20%, ctx=3780, majf=0, minf=1 00:39:32.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:32.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.565 issued rwts: total=1731,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:32.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:32.565 job1: (groupid=0, jobs=1): err= 0: pid=449385: Mon Nov 18 07:24:53 2024 00:39:32.565 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:39:32.565 slat (nsec): min=5042, max=51385, avg=14451.70, stdev=7895.26 00:39:32.565 clat (usec): min=195, max=450, avg=245.98, stdev=45.69 00:39:32.565 lat (usec): min=200, max=483, avg=260.43, stdev=51.94 00:39:32.565 clat percentiles (usec): 00:39:32.565 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 215], 00:39:32.565 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 233], 00:39:32.565 | 70.00th=[ 243], 80.00th=[ 281], 90.00th=[ 343], 95.00th=[ 351], 00:39:32.565 | 99.00th=[ 367], 99.50th=[ 371], 99.90th=[ 400], 99.95th=[ 412], 00:39:32.565 | 99.99th=[ 449] 00:39:32.565 write: IOPS=2434, BW=9738KiB/s (9972kB/s)(9748KiB/1001msec); 0 zone resets 00:39:32.565 slat (nsec): min=5913, max=40914, avg=13207.54, stdev=4639.00 00:39:32.565 clat (usec): min=144, max=442, avg=171.04, stdev=17.91 00:39:32.565 lat (usec): min=151, max=458, avg=184.24, stdev=18.75 00:39:32.565 clat percentiles (usec): 00:39:32.565 | 1.00th=[ 151], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 159], 00:39:32.565 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:39:32.565 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 190], 95.00th=[ 200], 00:39:32.565 | 99.00th=[ 229], 99.50th=[ 253], 99.90th=[ 351], 99.95th=[ 392], 00:39:32.565 | 99.99th=[ 441] 00:39:32.565 bw ( KiB/s): min=10608, max=10608, per=45.72%, avg=10608.00, stdev= 0.00, samples=1 00:39:32.565 iops : min= 2652, max= 2652, avg=2652.00, stdev= 0.00, samples=1 00:39:32.565 lat (usec) : 250=87.69%, 500=12.31% 00:39:32.565 cpu : usr=3.40%, sys=6.40%, ctx=4487, majf=0, minf=1 00:39:32.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:32.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.565 issued rwts: total=2048,2437,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:32.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:32.565 job2: (groupid=0, jobs=1): err= 0: pid=449386: Mon Nov 18 07:24:53 2024 00:39:32.565 read: IOPS=809, BW=3237KiB/s (3315kB/s)(3344KiB/1033msec) 00:39:32.565 slat (nsec): min=5816, max=52265, 
avg=15654.47, stdev=6966.41 00:39:32.565 clat (usec): min=196, max=41194, avg=935.78, stdev=5228.20 00:39:32.565 lat (usec): min=208, max=41211, avg=951.43, stdev=5230.10 00:39:32.565 clat percentiles (usec): 00:39:32.565 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 223], 20.00th=[ 227], 00:39:32.565 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 243], 00:39:32.565 | 70.00th=[ 245], 80.00th=[ 269], 90.00th=[ 326], 95.00th=[ 461], 00:39:32.565 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:32.565 | 99.99th=[41157] 00:39:32.565 write: IOPS=991, BW=3965KiB/s (4060kB/s)(4096KiB/1033msec); 0 zone resets 00:39:32.565 slat (nsec): min=6579, max=41602, avg=14653.38, stdev=5030.51 00:39:32.565 clat (usec): min=159, max=1274, avg=208.63, stdev=67.91 00:39:32.565 lat (usec): min=167, max=1296, avg=223.29, stdev=67.35 00:39:32.565 clat percentiles (usec): 00:39:32.565 | 1.00th=[ 163], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:39:32.565 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 208], 00:39:32.565 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 265], 00:39:32.565 | 99.00th=[ 441], 99.50th=[ 494], 99.90th=[ 988], 99.95th=[ 1270], 00:39:32.565 | 99.99th=[ 1270] 00:39:32.565 bw ( KiB/s): min= 8192, max= 8192, per=35.31%, avg=8192.00, stdev= 0.00, samples=1 00:39:32.565 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:32.565 lat (usec) : 250=83.33%, 500=15.59%, 750=0.16%, 1000=0.11% 00:39:32.565 lat (msec) : 2=0.05%, 50=0.75% 00:39:32.565 cpu : usr=1.65%, sys=2.62%, ctx=1861, majf=0, minf=1 00:39:32.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:32.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.565 issued rwts: total=836,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:32.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:32.565 job3: (groupid=0, jobs=1): err= 0: pid=449389: Mon Nov 18 07:24:53 2024 00:39:32.565 read: IOPS=21, BW=84.8KiB/s (86.8kB/s)(88.0KiB/1038msec) 00:39:32.565 slat (nsec): min=15170, max=34835, avg=29394.27, stdev=8251.56 00:39:32.565 clat (usec): min=40595, max=41048, avg=40939.98, stdev=86.80 00:39:32.565 lat (usec): min=40610, max=41064, avg=40969.37, stdev=88.36 00:39:32.565 clat percentiles (usec): 00:39:32.565 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:39:32.565 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:32.565 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:32.565 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:32.565 | 99.99th=[41157] 00:39:32.565 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:39:32.565 slat (nsec): min=6284, max=57593, avg=12336.36, stdev=6130.55 00:39:32.565 clat (usec): min=162, max=1108, avg=250.01, stdev=66.98 00:39:32.565 lat (usec): min=177, max=1117, avg=262.35, stdev=66.36 00:39:32.565 clat percentiles (usec): 00:39:32.565 | 1.00th=[ 184], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 233], 00:39:32.565 | 30.00th=[ 235], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 245], 00:39:32.565 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 293], 00:39:32.565 | 99.00th=[ 457], 99.50th=[ 873], 99.90th=[ 1106], 99.95th=[ 1106], 00:39:32.565 | 99.99th=[ 1106] 00:39:32.565 bw ( KiB/s): min= 4096, max= 4096, per=17.65%, avg=4096.00, stdev= 0.00, 
samples=1 00:39:32.565 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:32.565 lat (usec) : 250=76.78%, 500=18.35%, 1000=0.56% 00:39:32.565 lat (msec) : 2=0.19%, 50=4.12% 00:39:32.565 cpu : usr=0.58%, sys=0.29%, ctx=536, majf=0, minf=1 00:39:32.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:32.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.565 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:32.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:32.565 00:39:32.565 Run status group 0 (all jobs): 00:39:32.565 READ: bw=17.4MiB/s (18.3MB/s), 84.8KiB/s-8184KiB/s (86.8kB/s-8380kB/s), io=18.1MiB (19.0MB), run=1001-1038msec 00:39:32.565 WRITE: bw=22.7MiB/s (23.8MB/s), 1973KiB/s-9738KiB/s (2020kB/s-9972kB/s), io=23.5MiB (24.7MB), run=1001-1038msec 00:39:32.565 00:39:32.565 Disk stats (read/write): 00:39:32.565 nvme0n1: ios=1577/1566, merge=0/0, ticks=583/325, in_queue=908, util=96.89% 00:39:32.565 nvme0n2: ios=1832/2048, merge=0/0, ticks=1420/339, in_queue=1759, util=98.48% 00:39:32.565 nvme0n3: ios=855/1024, merge=0/0, ticks=1557/206, in_queue=1763, util=97.70% 00:39:32.565 nvme0n4: ios=74/512, merge=0/0, ticks=917/122, in_queue=1039, util=97.68% 00:39:32.565 07:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:39:32.565 [global] 00:39:32.565 thread=1 00:39:32.565 invalidate=1 00:39:32.565 rw=write 00:39:32.565 time_based=1 00:39:32.565 runtime=1 00:39:32.565 ioengine=libaio 00:39:32.565 direct=1 00:39:32.565 bs=4096 00:39:32.565 iodepth=128 00:39:32.565 norandommap=0 00:39:32.565 numjobs=1 00:39:32.565 00:39:32.565 verify_dump=1 00:39:32.565 verify_backlog=512 00:39:32.565 verify_state_save=0 00:39:32.565 do_verify=1 00:39:32.565 verify=crc32c-intel 00:39:32.565 [job0] 00:39:32.565 filename=/dev/nvme0n1 00:39:32.565 [job1] 00:39:32.565 filename=/dev/nvme0n2 00:39:32.565 [job2] 00:39:32.565 filename=/dev/nvme0n3 00:39:32.565 [job3] 00:39:32.565 filename=/dev/nvme0n4 00:39:32.565 Could not set queue depth (nvme0n1) 00:39:32.565 Could not set queue depth (nvme0n2) 00:39:32.565 Could not set queue depth (nvme0n3) 00:39:32.565 Could not set queue depth (nvme0n4) 00:39:32.565 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:32.565 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:32.565 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:32.565 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:32.565 fio-3.35 00:39:32.566 Starting 4 threads 00:39:33.942 00:39:33.942 job0: (groupid=0, jobs=1): err= 0: pid=449729: Mon Nov 18 07:24:54 2024 00:39:33.942 read: IOPS=4308, BW=16.8MiB/s (17.6MB/s)(16.9MiB/1006msec) 00:39:33.942 slat (usec): min=2, max=25818, avg=99.04, stdev=748.12 00:39:33.942 clat (usec): min=2709, max=61622, avg=12540.25, stdev=6689.72 00:39:33.942 lat (usec): min=5167, max=61631, avg=12639.29, stdev=6751.02 00:39:33.942 clat percentiles (usec): 00:39:33.942 | 1.00th=[ 6063], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10028], 00:39:33.943 | 30.00th=[10290], 40.00th=[10421], 
50.00th=[10552], 60.00th=[11076], 00:39:33.943 | 70.00th=[12125], 80.00th=[13960], 90.00th=[15139], 95.00th=[17695], 00:39:33.943 | 99.00th=[51119], 99.50th=[51643], 99.90th=[61604], 99.95th=[61604], 00:39:33.943 | 99.99th=[61604] 00:39:33.943 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:39:33.943 slat (usec): min=3, max=21142, avg=118.86, stdev=783.97 00:39:33.943 clat (usec): min=3715, max=57985, avg=15779.48, stdev=9743.32 00:39:33.943 lat (usec): min=3723, max=57996, avg=15898.34, stdev=9804.05 00:39:33.943 clat percentiles (usec): 00:39:33.943 | 1.00th=[ 6325], 5.00th=[10159], 10.00th=[10421], 20.00th=[10683], 00:39:33.943 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11600], 60.00th=[11863], 00:39:33.943 | 70.00th=[12125], 80.00th=[20841], 90.00th=[30278], 95.00th=[39060], 00:39:33.943 | 99.00th=[56886], 99.50th=[57934], 99.90th=[57934], 99.95th=[57934], 00:39:33.943 | 99.99th=[57934] 00:39:33.943 bw ( KiB/s): min=13512, max=23352, per=30.06%, avg=18432.00, stdev=6957.93, samples=2 00:39:33.943 iops : min= 3378, max= 5838, avg=4608.00, stdev=1739.48, samples=2 00:39:33.943 lat (msec) : 4=0.06%, 10=12.07%, 20=74.95%, 50=11.36%, 100=1.57% 00:39:33.943 cpu : usr=3.58%, sys=4.38%, ctx=375, majf=0, minf=2 00:39:33.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:33.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:33.943 issued rwts: total=4334,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:33.943 job1: (groupid=0, jobs=1): err= 0: pid=449730: Mon Nov 18 07:24:54 2024 00:39:33.943 read: IOPS=3838, BW=15.0MiB/s (15.7MB/s)(15.7MiB/1049msec) 00:39:33.943 slat (usec): min=2, max=16280, avg=136.66, stdev=959.89 00:39:33.943 clat (usec): min=5009, max=62760, avg=18734.34, stdev=9771.10 00:39:33.943 lat (usec): min=5013, max=62767, avg=18871.00, stdev=9811.89 00:39:33.943 clat percentiles (usec): 00:39:33.943 | 1.00th=[ 8356], 5.00th=[ 9503], 10.00th=[10421], 20.00th=[11600], 00:39:33.943 | 30.00th=[13304], 40.00th=[14091], 50.00th=[15533], 60.00th=[17695], 00:39:33.943 | 70.00th=[20317], 80.00th=[24773], 90.00th=[31327], 95.00th=[40109], 00:39:33.943 | 99.00th=[56886], 99.50th=[57410], 99.90th=[57410], 99.95th=[57410], 00:39:33.943 | 99.99th=[62653] 00:39:33.943 write: IOPS=3904, BW=15.3MiB/s (16.0MB/s)(16.0MiB/1049msec); 0 zone resets 00:39:33.943 slat (usec): min=3, max=11238, avg=105.99, stdev=720.00 00:39:33.943 clat (usec): min=5651, max=32866, avg=13983.24, stdev=3882.65 00:39:33.943 lat (usec): min=5662, max=32870, avg=14089.23, stdev=3937.74 00:39:33.943 clat percentiles (usec): 00:39:33.943 | 1.00th=[ 7635], 5.00th=[ 8979], 10.00th=[10421], 20.00th=[11338], 00:39:33.943 | 30.00th=[11731], 40.00th=[12256], 50.00th=[13304], 60.00th=[13829], 00:39:33.943 | 70.00th=[15533], 80.00th=[16712], 90.00th=[18220], 95.00th=[21365], 00:39:33.943 | 99.00th=[26870], 99.50th=[32900], 99.90th=[32900], 99.95th=[32900], 00:39:33.943 | 99.99th=[32900] 00:39:33.943 bw ( KiB/s): min=16384, max=16384, per=26.72%, avg=16384.00, stdev= 0.00, samples=2 00:39:33.943 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:39:33.943 lat (msec) : 10=7.92%, 20=72.79%, 50=18.15%, 100=1.14% 00:39:33.943 cpu : usr=2.29%, sys=3.15%, ctx=337, majf=0, minf=1 00:39:33.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:33.943 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:33.943 issued rwts: total=4027,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:33.943 job2: (groupid=0, jobs=1): err= 0: pid=449731: Mon Nov 18 07:24:54 2024 00:39:33.943 read: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1013msec) 00:39:33.943 slat (usec): min=2, max=22763, avg=167.87, stdev=1299.88 00:39:33.943 clat (usec): min=5568, max=64542, avg=20871.84, stdev=11857.51 00:39:33.943 lat (usec): min=5588, max=64546, avg=21039.71, stdev=11939.38 00:39:33.943 clat percentiles (usec): 00:39:33.943 | 1.00th=[ 8455], 5.00th=[10028], 10.00th=[10683], 20.00th=[11469], 00:39:33.943 | 30.00th=[13173], 40.00th=[14091], 50.00th=[15008], 60.00th=[17433], 00:39:33.943 | 70.00th=[23725], 80.00th=[32113], 90.00th=[41157], 95.00th=[43254], 00:39:33.943 | 99.00th=[56886], 99.50th=[56886], 99.90th=[64750], 99.95th=[64750], 00:39:33.943 | 99.99th=[64750] 00:39:33.943 write: IOPS=3307, BW=12.9MiB/s (13.5MB/s)(13.1MiB/1013msec); 0 zone resets 00:39:33.943 slat (usec): min=3, max=14280, avg=136.85, stdev=894.69 00:39:33.943 clat (usec): min=4403, max=82534, avg=19113.30, stdev=10219.11 00:39:33.943 lat (usec): min=4417, max=83940, avg=19250.15, stdev=10279.79 00:39:33.943 clat percentiles (usec): 00:39:33.943 | 1.00th=[ 6456], 5.00th=[ 8848], 10.00th=[11076], 20.00th=[12649], 00:39:33.943 | 30.00th=[13435], 40.00th=[13960], 50.00th=[16188], 60.00th=[19006], 00:39:33.943 | 70.00th=[21890], 80.00th=[24249], 90.00th=[27657], 95.00th=[34866], 00:39:33.943 | 99.00th=[69731], 99.50th=[71828], 99.90th=[82314], 99.95th=[82314], 00:39:33.943 | 99.99th=[82314] 00:39:33.943 bw ( KiB/s): min=12288, max=13488, per=21.02%, avg=12888.00, stdev=848.53, samples=2 00:39:33.943 iops : min= 3072, max= 3372, avg=3222.00, stdev=212.13, samples=2 00:39:33.943 lat (msec) : 10=5.22%, 20=57.09%, 50=35.46%, 100=2.24% 00:39:33.943 cpu : usr=2.08%, sys=3.75%, ctx=250, majf=0, minf=1 00:39:33.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:39:33.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:33.943 issued rwts: total=3072,3350,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:33.943 job3: (groupid=0, jobs=1): err= 0: pid=449732: Mon Nov 18 07:24:54 2024 00:39:33.943 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:39:33.943 slat (usec): min=2, max=16145, avg=120.98, stdev=873.91 00:39:33.943 clat (usec): min=7031, max=29461, avg=15603.18, stdev=3956.74 00:39:33.943 lat (usec): min=7037, max=35012, avg=15724.16, stdev=4019.69 00:39:33.943 clat percentiles (usec): 00:39:33.943 | 1.00th=[ 7111], 5.00th=[ 9503], 10.00th=[10683], 20.00th=[12387], 00:39:33.943 | 30.00th=[13566], 40.00th=[14353], 50.00th=[15139], 60.00th=[16319], 00:39:33.943 | 70.00th=[17433], 80.00th=[18482], 90.00th=[21103], 95.00th=[22414], 00:39:33.943 | 99.00th=[26084], 99.50th=[28705], 99.90th=[28705], 99.95th=[28705], 00:39:33.943 | 99.99th=[29492] 00:39:33.943 write: IOPS=4000, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1007msec); 0 zone resets 00:39:33.943 slat (usec): min=3, max=18624, avg=133.19, stdev=949.95 00:39:33.943 clat (usec): min=1205, max=56468, avg=17713.83, stdev=8990.11 00:39:33.943 lat (usec): min=1212, 
max=57395, avg=17847.02, stdev=9073.16 00:39:33.943 clat percentiles (usec): 00:39:33.943 | 1.00th=[ 5866], 5.00th=[ 8225], 10.00th=[10421], 20.00th=[12387], 00:39:33.943 | 30.00th=[13042], 40.00th=[13304], 50.00th=[14222], 60.00th=[16319], 00:39:33.943 | 70.00th=[19006], 80.00th=[21103], 90.00th=[31327], 95.00th=[33817], 00:39:33.943 | 99.00th=[55313], 99.50th=[56361], 99.90th=[56361], 99.95th=[56361], 00:39:33.943 | 99.99th=[56361] 00:39:33.943 bw ( KiB/s): min=14824, max=16384, per=25.45%, avg=15604.00, stdev=1103.09, samples=2 00:39:33.943 iops : min= 3706, max= 4096, avg=3901.00, stdev=275.77, samples=2 00:39:33.943 lat (msec) : 2=0.21%, 10=7.54%, 20=74.12%, 50=17.30%, 100=0.83% 00:39:33.943 cpu : usr=3.88%, sys=4.47%, ctx=331, majf=0, minf=1 00:39:33.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:33.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:33.943 issued rwts: total=3584,4028,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:33.943 00:39:33.943 Run status group 0 (all jobs): 00:39:33.943 READ: bw=55.9MiB/s (58.6MB/s), 11.8MiB/s-16.8MiB/s (12.4MB/s-17.6MB/s), io=58.7MiB (61.5MB), run=1006-1049msec 00:39:33.943 WRITE: bw=59.9MiB/s (62.8MB/s), 12.9MiB/s-17.9MiB/s (13.5MB/s-18.8MB/s), io=62.8MiB (65.9MB), run=1006-1049msec 00:39:33.943 00:39:33.943 Disk stats (read/write): 00:39:33.943 nvme0n1: ios=4086/4096, merge=0/0, ticks=23954/24338, in_queue=48292, util=97.70% 00:39:33.943 nvme0n2: ios=3077/3534, merge=0/0, ticks=27490/22817, in_queue=50307, util=84.87% 00:39:33.943 nvme0n3: ios=2560/2679, merge=0/0, ticks=34995/30173, in_queue=65168, util=87.92% 00:39:33.943 nvme0n4: ios=3129/3584, merge=0/0, ticks=41260/45240, in_queue=86500, util=98.01% 00:39:33.943 07:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:39:33.943 [global] 00:39:33.943 thread=1 00:39:33.943 invalidate=1 00:39:33.943 rw=randwrite 00:39:33.943 time_based=1 00:39:33.943 runtime=1 00:39:33.943 ioengine=libaio 00:39:33.943 direct=1 00:39:33.943 bs=4096 00:39:33.943 iodepth=128 00:39:33.943 norandommap=0 00:39:33.943 numjobs=1 00:39:33.943 00:39:33.943 verify_dump=1 00:39:33.943 verify_backlog=512 00:39:33.943 verify_state_save=0 00:39:33.943 do_verify=1 00:39:33.943 verify=crc32c-intel 00:39:33.943 [job0] 00:39:33.943 filename=/dev/nvme0n1 00:39:33.943 [job1] 00:39:33.943 filename=/dev/nvme0n2 00:39:33.943 [job2] 00:39:33.943 filename=/dev/nvme0n3 00:39:33.943 [job3] 00:39:33.943 filename=/dev/nvme0n4 00:39:33.943 Could not set queue depth (nvme0n1) 00:39:33.943 Could not set queue depth (nvme0n2) 00:39:33.943 Could not set queue depth (nvme0n3) 00:39:33.943 Could not set queue depth (nvme0n4) 00:39:34.202 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:34.202 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:34.202 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:34.202 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:34.202 fio-3.35 00:39:34.202 Starting 4 threads 00:39:35.577 
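
Every fio pass in this test is launched through the fio-wrapper helper with the same argument shape: -i and -d map to the bs and iodepth lines of the generated job file, -t to rw, -r to runtime, -p nvmf selects the four exported namespaces (/dev/nvme0n1 through /dev/nvme0n4, one [jobN] section each), and -v corresponds to the crc32c-intel verify settings printed with each job file. As a rough standalone equivalent for one namespace of the queue-depth-128 randwrite pass that is about to report (a sketch using stock fio options, not the wrapper itself):

fio --name=job0 --filename=/dev/nvme0n1 --thread --ioengine=libaio --direct=1 \
    --bs=4096 --iodepth=128 --rw=randwrite --time_based --runtime=1 \
    --verify=crc32c-intel --do_verify=1 --verify_dump=1 --verify_backlog=512

The earlier passes in this log differ only in --rw (write vs randwrite) and --iodepth (1 vs 128).
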
00:39:35.577 job0: (groupid=0, jobs=1): err= 0: pid=449964: Mon Nov 18 07:24:56 2024 00:39:35.577 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:39:35.577 slat (usec): min=2, max=5188, avg=85.17, stdev=500.94 00:39:35.577 clat (usec): min=5597, max=18106, avg=11180.41, stdev=1699.88 00:39:35.577 lat (usec): min=5602, max=18111, avg=11265.59, stdev=1719.38 00:39:35.577 clat percentiles (usec): 00:39:35.577 | 1.00th=[ 6587], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9896], 00:39:35.577 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11076], 60.00th=[11469], 00:39:35.577 | 70.00th=[11994], 80.00th=[12518], 90.00th=[13173], 95.00th=[14091], 00:39:35.577 | 99.00th=[15664], 99.50th=[16319], 99.90th=[16712], 99.95th=[16712], 00:39:35.577 | 99.99th=[18220] 00:39:35.577 write: IOPS=5634, BW=22.0MiB/s (23.1MB/s)(22.1MiB/1002msec); 0 zone resets 00:39:35.577 slat (usec): min=3, max=8990, avg=83.50, stdev=494.06 00:39:35.577 clat (usec): min=296, max=21883, avg=11137.57, stdev=1611.70 00:39:35.577 lat (usec): min=2947, max=21913, avg=11221.07, stdev=1643.51 00:39:35.577 clat percentiles (usec): 00:39:35.577 | 1.00th=[ 6718], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[10421], 00:39:35.577 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:39:35.577 | 70.00th=[11469], 80.00th=[11863], 90.00th=[12518], 95.00th=[13960], 00:39:35.577 | 99.00th=[17695], 99.50th=[19268], 99.90th=[19268], 99.95th=[19268], 00:39:35.577 | 99.99th=[21890] 00:39:35.577 bw ( KiB/s): min=21576, max=23480, per=35.04%, avg=22528.00, stdev=1346.33, samples=2 00:39:35.577 iops : min= 5394, max= 5870, avg=5632.00, stdev=336.58, samples=2 00:39:35.577 lat (usec) : 500=0.01% 00:39:35.577 lat (msec) : 4=0.05%, 10=16.41%, 20=83.52%, 50=0.01% 00:39:35.577 cpu : usr=6.79%, sys=8.79%, ctx=446, majf=0, minf=1 00:39:35.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:39:35.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:35.577 issued rwts: total=5632,5646,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.577 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:35.577 job1: (groupid=0, jobs=1): err= 0: pid=449965: Mon Nov 18 07:24:56 2024 00:39:35.577 read: IOPS=2662, BW=10.4MiB/s (10.9MB/s)(10.9MiB/1047msec) 00:39:35.577 slat (usec): min=2, max=15416, avg=152.62, stdev=1123.94 00:39:35.577 clat (msec): min=4, max=121, avg=23.53, stdev=16.98 00:39:35.577 lat (msec): min=4, max=131, avg=23.68, stdev=17.04 00:39:35.577 clat percentiles (msec): 00:39:35.578 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 12], 20.00th=[ 13], 00:39:35.578 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 18], 60.00th=[ 21], 00:39:35.578 | 70.00th=[ 26], 80.00th=[ 31], 90.00th=[ 43], 95.00th=[ 57], 00:39:35.578 | 99.00th=[ 120], 99.50th=[ 120], 99.90th=[ 122], 99.95th=[ 122], 00:39:35.578 | 99.99th=[ 122] 00:39:35.578 write: IOPS=2934, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1047msec); 0 zone resets 00:39:35.578 slat (usec): min=3, max=15369, avg=169.24, stdev=1014.57 00:39:35.578 clat (usec): min=222, max=123554, avg=21903.82, stdev=17659.90 00:39:35.578 lat (usec): min=684, max=123572, avg=22073.06, stdev=17799.71 00:39:35.578 clat percentiles (usec): 00:39:35.578 | 1.00th=[ 1237], 5.00th=[ 3490], 10.00th=[ 5604], 20.00th=[ 11076], 00:39:35.578 | 30.00th=[ 14877], 40.00th=[ 15664], 50.00th=[ 16057], 60.00th=[ 20579], 00:39:35.578 | 70.00th=[ 23987], 80.00th=[ 26608], 90.00th=[ 43779], 95.00th=[ 57934], 
00:39:35.578 | 99.00th=[ 94897], 99.50th=[111674], 99.90th=[121111], 99.95th=[121111], 00:39:35.578 | 99.99th=[123208] 00:39:35.578 bw ( KiB/s): min=10432, max=14144, per=19.11%, avg=12288.00, stdev=2624.78, samples=2 00:39:35.578 iops : min= 2608, max= 3536, avg=3072.00, stdev=656.20, samples=2 00:39:35.578 lat (usec) : 250=0.02%, 750=0.10%, 1000=0.12% 00:39:35.578 lat (msec) : 2=0.87%, 4=2.10%, 10=9.16%, 20=46.42%, 50=34.16% 00:39:35.578 lat (msec) : 100=6.01%, 250=1.04% 00:39:35.578 cpu : usr=2.29%, sys=4.02%, ctx=258, majf=0, minf=2 00:39:35.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:39:35.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:35.578 issued rwts: total=2788,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:35.578 job2: (groupid=0, jobs=1): err= 0: pid=449967: Mon Nov 18 07:24:56 2024 00:39:35.578 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:39:35.578 slat (usec): min=2, max=12603, avg=120.33, stdev=935.12 00:39:35.578 clat (usec): min=1528, max=38159, avg=16341.61, stdev=6319.52 00:39:35.578 lat (usec): min=3591, max=38164, avg=16461.95, stdev=6363.09 00:39:35.578 clat percentiles (usec): 00:39:35.578 | 1.00th=[ 9896], 5.00th=[11076], 10.00th=[11338], 20.00th=[11731], 00:39:35.578 | 30.00th=[11994], 40.00th=[12387], 50.00th=[13566], 60.00th=[16581], 00:39:35.578 | 70.00th=[17695], 80.00th=[19792], 90.00th=[25560], 95.00th=[32637], 00:39:35.578 | 99.00th=[35914], 99.50th=[36439], 99.90th=[38011], 99.95th=[38011], 00:39:35.578 | 99.99th=[38011] 00:39:35.578 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:39:35.578 slat (usec): min=3, max=24895, avg=127.92, stdev=1025.03 00:39:35.578 clat (usec): min=1110, max=90297, avg=19161.39, stdev=12493.23 00:39:35.578 lat (usec): min=1119, max=90304, avg=19289.31, stdev=12575.58 00:39:35.578 clat percentiles (usec): 00:39:35.578 | 1.00th=[ 4293], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[11207], 00:39:35.578 | 30.00th=[11863], 40.00th=[13042], 50.00th=[13435], 60.00th=[16057], 00:39:35.578 | 70.00th=[19268], 80.00th=[27132], 90.00th=[36963], 95.00th=[47449], 00:39:35.578 | 99.00th=[61604], 99.50th=[61604], 99.90th=[89654], 99.95th=[89654], 00:39:35.578 | 99.99th=[90702] 00:39:35.578 bw ( KiB/s): min=12232, max=16440, per=22.30%, avg=14336.00, stdev=2975.51, samples=2 00:39:35.578 iops : min= 3058, max= 4110, avg=3584.00, stdev=743.88, samples=2 00:39:35.578 lat (msec) : 2=0.07%, 4=0.43%, 10=7.19%, 20=67.94%, 50=22.47% 00:39:35.578 lat (msec) : 100=1.90% 00:39:35.578 cpu : usr=3.59%, sys=4.89%, ctx=271, majf=0, minf=1 00:39:35.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:39:35.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:35.578 issued rwts: total=3577,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:35.578 job3: (groupid=0, jobs=1): err= 0: pid=449968: Mon Nov 18 07:24:56 2024 00:39:35.578 read: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec) 00:39:35.578 slat (usec): min=3, max=14894, avg=107.68, stdev=823.01 00:39:35.578 clat (usec): min=4702, max=49069, avg=14131.07, stdev=4794.40 00:39:35.578 lat (usec): min=4710, max=49077, avg=14238.75, 
stdev=4864.15 00:39:35.578 clat percentiles (usec): 00:39:35.578 | 1.00th=[ 7767], 5.00th=[10290], 10.00th=[10945], 20.00th=[11338], 00:39:35.578 | 30.00th=[11731], 40.00th=[12256], 50.00th=[13042], 60.00th=[13829], 00:39:35.578 | 70.00th=[14353], 80.00th=[15270], 90.00th=[18744], 95.00th=[21890], 00:39:35.578 | 99.00th=[40633], 99.50th=[46400], 99.90th=[49021], 99.95th=[49021], 00:39:35.578 | 99.99th=[49021] 00:39:35.578 write: IOPS=4466, BW=17.4MiB/s (18.3MB/s)(17.7MiB/1013msec); 0 zone resets 00:39:35.578 slat (usec): min=4, max=11397, avg=115.10, stdev=751.92 00:39:35.578 clat (usec): min=3250, max=49379, avg=15595.21, stdev=8075.58 00:39:35.578 lat (usec): min=3256, max=49387, avg=15710.30, stdev=8139.64 00:39:35.578 clat percentiles (usec): 00:39:35.578 | 1.00th=[ 6783], 5.00th=[ 7701], 10.00th=[ 9503], 20.00th=[11600], 00:39:35.578 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12518], 60.00th=[13042], 00:39:35.578 | 70.00th=[13698], 80.00th=[19792], 90.00th=[24249], 95.00th=[37487], 00:39:35.578 | 99.00th=[43254], 99.50th=[44303], 99.90th=[49546], 99.95th=[49546], 00:39:35.578 | 99.99th=[49546] 00:39:35.578 bw ( KiB/s): min=14696, max=20521, per=27.39%, avg=17608.50, stdev=4118.90, samples=2 00:39:35.578 iops : min= 3674, max= 5130, avg=4402.00, stdev=1029.55, samples=2 00:39:35.578 lat (msec) : 4=0.07%, 10=7.75%, 20=78.46%, 50=13.72% 00:39:35.578 cpu : usr=5.73%, sys=7.21%, ctx=289, majf=0, minf=1 00:39:35.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:35.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:35.578 issued rwts: total=4096,4525,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:35.578 00:39:35.578 Run status group 0 (all jobs): 00:39:35.578 READ: bw=60.0MiB/s (63.0MB/s), 10.4MiB/s-22.0MiB/s (10.9MB/s-23.0MB/s), io=62.9MiB (65.9MB), run=1002-1047msec 00:39:35.578 WRITE: bw=62.8MiB/s (65.8MB/s), 11.5MiB/s-22.0MiB/s (12.0MB/s-23.1MB/s), io=65.7MiB (68.9MB), run=1002-1047msec 00:39:35.578 00:39:35.578 Disk stats (read/write): 00:39:35.578 nvme0n1: ios=4645/4976, merge=0/0, ticks=24533/23427, in_queue=47960, util=98.80% 00:39:35.578 nvme0n2: ios=2361/2560, merge=0/0, ticks=37934/46333, in_queue=84267, util=87.01% 00:39:35.578 nvme0n3: ios=2604/2951, merge=0/0, ticks=36361/42701, in_queue=79062, util=99.90% 00:39:35.578 nvme0n4: ios=3703/4096, merge=0/0, ticks=49800/54171, in_queue=103971, util=89.60% 00:39:35.578 07:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:39:35.578 07:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=450103 00:39:35.578 07:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:39:35.578 07:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:39:35.578 [global] 00:39:35.578 thread=1 00:39:35.578 invalidate=1 00:39:35.578 rw=read 00:39:35.578 time_based=1 00:39:35.578 runtime=10 00:39:35.578 ioengine=libaio 00:39:35.578 direct=1 00:39:35.578 bs=4096 00:39:35.578 iodepth=1 00:39:35.578 norandommap=1 00:39:35.578 numjobs=1 00:39:35.578 00:39:35.578 [job0] 00:39:35.578 filename=/dev/nvme0n1 00:39:35.578 [job1] 00:39:35.578 filename=/dev/nvme0n2 00:39:35.578 [job2] 
00:39:35.578 filename=/dev/nvme0n3 00:39:35.578 [job3] 00:39:35.578 filename=/dev/nvme0n4 00:39:35.578 Could not set queue depth (nvme0n1) 00:39:35.578 Could not set queue depth (nvme0n2) 00:39:35.578 Could not set queue depth (nvme0n3) 00:39:35.578 Could not set queue depth (nvme0n4) 00:39:35.578 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:35.578 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:35.578 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:35.578 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:35.578 fio-3.35 00:39:35.578 Starting 4 threads 00:39:38.862 07:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:39:38.862 07:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:39:38.862 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=16080896, buflen=4096 00:39:38.862 fio: pid=450194, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:38.862 07:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:38.862 07:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:39:38.862 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=33464320, buflen=4096 00:39:38.862 fio: pid=450193, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:39.428 07:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:39.428 07:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:39:39.428 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=29417472, buflen=4096 00:39:39.428 fio: pid=450191, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:39.686 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=15740928, buflen=4096 00:39:39.686 fio: pid=450192, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:39.686 07:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:39.686 07:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:39:39.686 00:39:39.687 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=450191: Mon Nov 18 07:25:00 2024 00:39:39.687 read: IOPS=2025, BW=8102KiB/s (8296kB/s)(28.1MiB/3546msec) 00:39:39.687 slat (usec): min=4, max=33873, avg=14.27, stdev=413.18 00:39:39.687 clat (usec): min=188, max=42023, avg=476.17, stdev=3149.04 00:39:39.687 lat 
(usec): min=193, max=74953, avg=490.44, stdev=3251.90 00:39:39.687 clat percentiles (usec): 00:39:39.687 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 204], 00:39:39.687 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 221], 00:39:39.687 | 70.00th=[ 231], 80.00th=[ 247], 90.00th=[ 285], 95.00th=[ 310], 00:39:39.687 | 99.00th=[ 545], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:39.687 | 99.99th=[42206] 00:39:39.687 bw ( KiB/s): min= 104, max=17416, per=39.23%, avg=9480.00, stdev=7962.74, samples=6 00:39:39.687 iops : min= 26, max= 4354, avg=2370.00, stdev=1990.68, samples=6 00:39:39.687 lat (usec) : 250=81.44%, 500=17.19%, 750=0.72%, 1000=0.01% 00:39:39.687 lat (msec) : 20=0.01%, 50=0.60% 00:39:39.687 cpu : usr=0.48%, sys=2.37%, ctx=7188, majf=0, minf=2 00:39:39.687 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:39.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.687 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.687 issued rwts: total=7183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:39.687 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:39.687 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=450192: Mon Nov 18 07:25:00 2024 00:39:39.687 read: IOPS=1004, BW=4017KiB/s (4113kB/s)(15.0MiB/3827msec) 00:39:39.687 slat (usec): min=3, max=18551, avg=22.91, stdev=465.13 00:39:39.687 clat (usec): min=188, max=41237, avg=964.88, stdev=5333.99 00:39:39.687 lat (usec): min=197, max=53962, avg=987.80, stdev=5379.50 00:39:39.687 clat percentiles (usec): 00:39:39.687 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 229], 00:39:39.687 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:39:39.687 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 375], 00:39:39.687 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:39.687 | 99.99th=[41157] 00:39:39.687 bw ( KiB/s): min= 96, max= 7344, per=13.65%, avg=3299.14, stdev=2707.59, samples=7 00:39:39.687 iops : min= 24, max= 1836, avg=824.71, stdev=676.90, samples=7 00:39:39.687 lat (usec) : 250=54.27%, 500=43.55%, 750=0.29%, 1000=0.08% 00:39:39.687 lat (msec) : 2=0.05%, 50=1.74% 00:39:39.687 cpu : usr=0.34%, sys=0.97%, ctx=3851, majf=0, minf=2 00:39:39.687 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:39.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.687 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.687 issued rwts: total=3844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:39.687 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:39.687 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=450193: Mon Nov 18 07:25:00 2024 00:39:39.687 read: IOPS=2528, BW=9.88MiB/s (10.4MB/s)(31.9MiB/3231msec) 00:39:39.687 slat (nsec): min=3952, max=51460, avg=6812.53, stdev=3840.47 00:39:39.687 clat (usec): min=193, max=41989, avg=384.19, stdev=2368.42 00:39:39.687 lat (usec): min=199, max=42009, avg=391.00, stdev=2368.89 00:39:39.687 clat percentiles (usec): 00:39:39.687 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 223], 00:39:39.687 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 243], 00:39:39.687 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 281], 95.00th=[ 310], 00:39:39.687 | 99.00th=[ 537], 99.50th=[ 578], 
99.90th=[41681], 99.95th=[41681], 00:39:39.687 | 99.99th=[42206] 00:39:39.687 bw ( KiB/s): min= 192, max=16016, per=39.86%, avg=9632.00, stdev=6711.69, samples=6 00:39:39.687 iops : min= 48, max= 4004, avg=2408.00, stdev=1677.92, samples=6 00:39:39.687 lat (usec) : 250=71.52%, 500=27.06%, 750=1.06% 00:39:39.687 lat (msec) : 10=0.01%, 50=0.33% 00:39:39.687 cpu : usr=0.62%, sys=2.32%, ctx=8172, majf=0, minf=1 00:39:39.687 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:39.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.687 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.687 issued rwts: total=8171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:39.687 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:39.687 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=450194: Mon Nov 18 07:25:00 2024 00:39:39.687 read: IOPS=1332, BW=5329KiB/s (5457kB/s)(15.3MiB/2947msec) 00:39:39.687 slat (nsec): min=4286, max=40352, avg=9133.12, stdev=5199.54 00:39:39.687 clat (usec): min=197, max=42060, avg=733.20, stdev=4279.82 00:39:39.687 lat (usec): min=202, max=42077, avg=742.33, stdev=4280.81 00:39:39.687 clat percentiles (usec): 00:39:39.687 | 1.00th=[ 221], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 237], 00:39:39.687 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 269], 60.00th=[ 277], 00:39:39.687 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 351], 95.00th=[ 494], 00:39:39.687 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:39:39.687 | 99.99th=[42206] 00:39:39.687 bw ( KiB/s): min= 152, max= 7272, per=16.68%, avg=4030.40, stdev=3443.95, samples=5 00:39:39.687 iops : min= 38, max= 1818, avg=1007.60, stdev=860.99, samples=5 00:39:39.687 lat (usec) : 250=40.01%, 500=55.26%, 750=3.62% 00:39:39.687 lat (msec) : 50=1.09% 00:39:39.687 cpu : usr=0.48%, sys=1.80%, ctx=3929, majf=0, minf=2 00:39:39.687 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:39.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.687 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.687 issued rwts: total=3927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:39.687 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:39.687 00:39:39.687 Run status group 0 (all jobs): 00:39:39.687 READ: bw=23.6MiB/s (24.7MB/s), 4017KiB/s-9.88MiB/s (4113kB/s-10.4MB/s), io=90.3MiB (94.7MB), run=2947-3827msec 00:39:39.687 00:39:39.687 Disk stats (read/write): 00:39:39.687 nvme0n1: ios=7157/0, merge=0/0, ticks=4197/0, in_queue=4197, util=99.11% 00:39:39.687 nvme0n2: ios=2972/0, merge=0/0, ticks=3503/0, in_queue=3503, util=95.15% 00:39:39.687 nvme0n3: ios=7697/0, merge=0/0, ticks=2990/0, in_queue=2990, util=96.76% 00:39:39.687 nvme0n4: ios=3696/0, merge=0/0, ticks=2886/0, in_queue=2886, util=99.36% 00:39:39.945 07:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:39.945 07:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:39:40.204 07:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:40.204 07:25:01 
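
The err=95 (Operation not supported) results above are the point of this final pass rather than a failure: fio is left reading from the four namespaces for 10 seconds while the script deletes the RAID volumes and the malloc bdevs behind them, so the I/O errors show that namespace removal reaches the initiator mid-workload. Condensed from the trace (wrapper and rpc.py paths shortened, status handling simplified), the pattern is roughly:

fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
rpc.py bdev_raid_delete concat0
rpc.py bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    rpc.py bdev_malloc_delete $m
done
fio_status=0
wait $fio_pid || fio_status=$?
# A non-zero status here is the expected outcome, reported below as
# "nvmf hotplug test: fio failed as expected".

After the wait, the host disconnects from nqn.2016-06.io.spdk:cnode1, the subsystem is deleted, and nvmftestfini unloads the nvme-tcp kernel modules and stops the target process.
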
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:39:40.463 07:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:40.464 07:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:39:40.722 07:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:40.722 07:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:39:40.981 07:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:39:40.981 07:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 450103 00:39:40.981 07:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:39:40.981 07:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:41.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:41.239 07:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:41.239 07:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:39:41.239 07:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:41.239 07:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:41.239 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:41.239 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:41.239 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:39:41.239 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:39:41.240 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:39:41.240 nvmf hotplug test: fio failed as expected 00:39:41.240 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - 
SIGINT SIGTERM EXIT 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:41.498 rmmod nvme_tcp 00:39:41.498 rmmod nvme_fabrics 00:39:41.498 rmmod nvme_keyring 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 448205 ']' 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 448205 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 448205 ']' 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 448205 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 448205 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 448205' 00:39:41.498 killing process with pid 448205 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 448205 00:39:41.498 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 448205 00:39:41.757 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:41.757 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:41.757 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:41.757 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:39:41.757 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@791 -- # iptables-save 00:39:41.757 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:41.757 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:39:41.757 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:41.757 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:41.758 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:41.758 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:41.758 07:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:44.294 00:39:44.294 real 0m23.928s 00:39:44.294 user 1m8.236s 00:39:44.294 sys 0m9.957s 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:44.294 ************************************ 00:39:44.294 END TEST nvmf_fio_target 00:39:44.294 ************************************ 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:44.294 ************************************ 00:39:44.294 START TEST nvmf_bdevio 00:39:44.294 ************************************ 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:44.294 * Looking for test storage... 
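Editor's note: before the bdevio run begins, it helps to condense what the nvmf_fio_target teardown traced above actually did. The following is a sketch reconstructed from the xtrace only (pid 450103, the Malloc3-6 bdev names and the SPDKISFASTANDAWESOME serial are this run's values; the exact error handling inside target/fio.sh is elided):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for malloc_bdev in Malloc3 Malloc4 Malloc5 Malloc6; do
        "$rpc" bdev_malloc_delete "$malloc_bdev"       # hot-remove the namespaces while fio is still running
    done
    wait 450103 || fio_status=4                        # fio exits non-zero once its block devices disappear
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1                                        # poll until the serial is gone from lsblk
    done
    [ "$fio_status" -eq 0 ] || echo 'nvmf hotplug test: fio failed as expected'
    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state
    nvmftestfini   # from test/nvmf/common.sh: unloads nvme-tcp, restores iptables, removes the netns

This lines up with the err=95 (Operation not supported) status on every fio job above and the "fio failed as expected" message: the hotplug test removes the backing bdevs while fio is still issuing I/O, and that failure is the expected outcome.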
00:39:44.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:44.294 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:44.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.295 --rc genhtml_branch_coverage=1 00:39:44.295 --rc genhtml_function_coverage=1 00:39:44.295 --rc genhtml_legend=1 00:39:44.295 --rc geninfo_all_blocks=1 00:39:44.295 --rc geninfo_unexecuted_blocks=1 00:39:44.295 00:39:44.295 ' 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:44.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.295 --rc genhtml_branch_coverage=1 00:39:44.295 --rc genhtml_function_coverage=1 00:39:44.295 --rc genhtml_legend=1 00:39:44.295 --rc geninfo_all_blocks=1 00:39:44.295 --rc geninfo_unexecuted_blocks=1 00:39:44.295 00:39:44.295 ' 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:44.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.295 --rc genhtml_branch_coverage=1 00:39:44.295 --rc genhtml_function_coverage=1 00:39:44.295 --rc genhtml_legend=1 00:39:44.295 --rc geninfo_all_blocks=1 00:39:44.295 --rc geninfo_unexecuted_blocks=1 00:39:44.295 00:39:44.295 ' 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:44.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.295 --rc genhtml_branch_coverage=1 00:39:44.295 --rc genhtml_function_coverage=1 00:39:44.295 --rc genhtml_legend=1 00:39:44.295 --rc geninfo_all_blocks=1 00:39:44.295 --rc geninfo_unexecuted_blocks=1 00:39:44.295 00:39:44.295 ' 00:39:44.295 07:25:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:44.295 07:25:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:39:44.295 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:46.200 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:46.200 07:25:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:46.200 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:46.200 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:46.200 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:46.200 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:46.201 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:46.201 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:46.201 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:46.201 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:46.201 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:46.201 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:46.201 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:46.201 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:46.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:46.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:39:46.201 00:39:46.201 --- 10.0.0.2 ping statistics --- 00:39:46.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:46.201 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:46.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:46.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:39:46.201 00:39:46.201 --- 10.0.0.1 ping statistics --- 00:39:46.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:46.201 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:46.201 07:25:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=452928 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 452928 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 452928 ']' 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:46.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:46.201 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:46.201 [2024-11-18 07:25:07.147362] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:46.201 [2024-11-18 07:25:07.148571] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:39:46.201 [2024-11-18 07:25:07.148646] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:46.460 [2024-11-18 07:25:07.225405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:46.461 [2024-11-18 07:25:07.276735] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:46.461 [2024-11-18 07:25:07.276806] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:46.461 [2024-11-18 07:25:07.276821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:46.461 [2024-11-18 07:25:07.276833] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:46.461 [2024-11-18 07:25:07.276843] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:46.461 [2024-11-18 07:25:07.278441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:46.461 [2024-11-18 07:25:07.278535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:39:46.461 [2024-11-18 07:25:07.278539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:46.461 [2024-11-18 07:25:07.278469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:39:46.461 [2024-11-18 07:25:07.374186] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
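For reference, the network plumbing and interrupt-mode target launch traced above reduce to roughly the following (a sketch; the cvl_0_0/cvl_0_1 device names, the 10.0.0.x addressing and the build path are specific to this CI host):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # tagged SPDK_NVMF by the real helper
    ping -c 1 10.0.0.2                                     # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!                                             # 452928 in this run

With -m 0x78 the target runs reactors on cores 3-6, and --interrupt-mode is what produces the "Set spdk_thread (...) to intr mode" notices that follow.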
00:39:46.461 [2024-11-18 07:25:07.374403] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:46.461 [2024-11-18 07:25:07.374731] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:46.461 [2024-11-18 07:25:07.375359] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:46.461 [2024-11-18 07:25:07.375627] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:46.461 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:46.461 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:39:46.461 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:46.461 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:46.461 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:46.461 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:46.461 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:46.461 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.461 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:46.461 [2024-11-18 07:25:07.427201] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:46.718 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.718 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:46.718 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.718 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:46.718 Malloc0 00:39:46.718 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.718 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:46.718 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.718 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:46.718 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.718 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:46.718 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.718 07:25:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:46.718 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.719 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:46.719 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.719 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:46.719 [2024-11-18 07:25:07.499413] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:46.719 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.719 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:39:46.719 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:39:46.719 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:39:46.719 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:39:46.719 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:46.719 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:46.719 { 00:39:46.719 "params": { 00:39:46.719 "name": "Nvme$subsystem", 00:39:46.719 "trtype": "$TEST_TRANSPORT", 00:39:46.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:46.719 "adrfam": "ipv4", 00:39:46.719 "trsvcid": "$NVMF_PORT", 00:39:46.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:46.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:46.719 "hdgst": ${hdgst:-false}, 00:39:46.719 "ddgst": ${ddgst:-false} 00:39:46.719 }, 00:39:46.719 "method": "bdev_nvme_attach_controller" 00:39:46.719 } 00:39:46.719 EOF 00:39:46.719 )") 00:39:46.719 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:39:46.719 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:39:46.719 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:39:46.719 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:46.719 "params": { 00:39:46.719 "name": "Nvme1", 00:39:46.719 "trtype": "tcp", 00:39:46.719 "traddr": "10.0.0.2", 00:39:46.719 "adrfam": "ipv4", 00:39:46.719 "trsvcid": "4420", 00:39:46.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:46.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:46.719 "hdgst": false, 00:39:46.719 "ddgst": false 00:39:46.719 }, 00:39:46.719 "method": "bdev_nvme_attach_controller" 00:39:46.719 }' 00:39:46.719 [2024-11-18 07:25:07.549811] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
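Stripped of the rpc_cmd wrapper, the bdevio target provisioning above is a five-RPC sequence followed by the bdevio run itself; approximately (rpc.py lives under this workspace's spdk/scripts and talks to the target started earlier):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevio then attaches over NVMe/TCP using the JSON printed above
    # (gen_nvmf_target_json comes from test/nvmf/common.sh):
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
        --json <(gen_nvmf_target_json)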
00:39:46.719 [2024-11-18 07:25:07.549899] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid452959 ] 00:39:46.719 [2024-11-18 07:25:07.621123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:46.719 [2024-11-18 07:25:07.670932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:46.719 [2024-11-18 07:25:07.670983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:46.719 [2024-11-18 07:25:07.670987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:46.977 I/O targets: 00:39:46.977 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:39:46.977 00:39:46.977 00:39:46.977 CUnit - A unit testing framework for C - Version 2.1-3 00:39:46.977 http://cunit.sourceforge.net/ 00:39:46.977 00:39:46.977 00:39:46.977 Suite: bdevio tests on: Nvme1n1 00:39:46.977 Test: blockdev write read block ...passed 00:39:47.235 Test: blockdev write zeroes read block ...passed 00:39:47.235 Test: blockdev write zeroes read no split ...passed 00:39:47.235 Test: blockdev write zeroes read split ...passed 00:39:47.235 Test: blockdev write zeroes read split partial ...passed 00:39:47.235 Test: blockdev reset ...[2024-11-18 07:25:08.022347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:39:47.235 [2024-11-18 07:25:08.022471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12abb70 (9): Bad file descriptor 00:39:47.235 [2024-11-18 07:25:08.074603] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:39:47.235 passed 00:39:47.235 Test: blockdev write read 8 blocks ...passed 00:39:47.235 Test: blockdev write read size > 128k ...passed 00:39:47.235 Test: blockdev write read invalid size ...passed 00:39:47.235 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:47.235 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:47.235 Test: blockdev write read max offset ...passed 00:39:47.494 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:47.494 Test: blockdev writev readv 8 blocks ...passed 00:39:47.494 Test: blockdev writev readv 30 x 1block ...passed 00:39:47.494 Test: blockdev writev readv block ...passed 00:39:47.494 Test: blockdev writev readv size > 128k ...passed 00:39:47.494 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:47.494 Test: blockdev comparev and writev ...[2024-11-18 07:25:08.368093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:47.494 [2024-11-18 07:25:08.368128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:47.494 [2024-11-18 07:25:08.368153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:47.494 [2024-11-18 07:25:08.368170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:47.494 [2024-11-18 07:25:08.368581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:47.494 [2024-11-18 07:25:08.368606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:39:47.494 [2024-11-18 07:25:08.368627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:47.494 [2024-11-18 07:25:08.368643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:39:47.494 [2024-11-18 07:25:08.369044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:47.494 [2024-11-18 07:25:08.369068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:39:47.494 [2024-11-18 07:25:08.369090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:47.494 [2024-11-18 07:25:08.369105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:39:47.494 [2024-11-18 07:25:08.369497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:47.494 [2024-11-18 07:25:08.369522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:39:47.494 [2024-11-18 07:25:08.369543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:47.494 [2024-11-18 07:25:08.369559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:39:47.494 passed 00:39:47.494 Test: blockdev nvme passthru rw ...passed 00:39:47.494 Test: blockdev nvme passthru vendor specific ...[2024-11-18 07:25:08.451755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:47.494 [2024-11-18 07:25:08.451781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:39:47.494 [2024-11-18 07:25:08.451923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:47.494 [2024-11-18 07:25:08.451946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:39:47.494 [2024-11-18 07:25:08.452084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:47.494 [2024-11-18 07:25:08.452107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:39:47.494 [2024-11-18 07:25:08.452246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:47.494 [2024-11-18 07:25:08.452269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:39:47.494 passed 00:39:47.494 Test: blockdev nvme admin passthru ...passed 00:39:47.753 Test: blockdev copy ...passed 00:39:47.753 00:39:47.753 Run Summary: Type Total Ran Passed Failed Inactive 00:39:47.753 suites 1 1 n/a 0 0 00:39:47.753 tests 23 23 23 0 0 00:39:47.753 asserts 152 152 152 0 n/a 00:39:47.753 00:39:47.753 Elapsed time = 1.248 seconds 00:39:47.753 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:47.753 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.753 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:47.753 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.753 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:39:47.753 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:39:47.753 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:47.753 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:39:47.753 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:47.753 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:39:47.753 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:47.753 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:47.753 rmmod nvme_tcp 00:39:47.753 rmmod nvme_fabrics 00:39:47.753 rmmod nvme_keyring 00:39:48.020 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
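Once the suite reports 23/23 tests passed (the COMPARE FAILURE and ABORTED - FAILED FUSED completions above accompany test cases that still pass; the run summary shows zero failures), bdevio.sh tears the target back down. Condensed from the trace, with the same path assumptions as the sketches above:

    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    trap - SIGINT SIGTERM EXIT
    nvmftestfini      # from test/nvmf/common.sh: unloads nvme-tcp/nvme-fabrics/nvme-keyring,
                      # kills the nvmf_tgt started earlier, restores iptables and the namespace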
00:39:48.020 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:39:48.020 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:39:48.020 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 452928 ']' 00:39:48.020 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 452928 00:39:48.020 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 452928 ']' 00:39:48.020 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 452928 00:39:48.020 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:39:48.020 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:48.020 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 452928 00:39:48.020 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:39:48.020 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:39:48.020 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 452928' 00:39:48.020 killing process with pid 452928 00:39:48.020 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 452928 00:39:48.020 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 452928 00:39:48.297 07:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:48.297 07:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:48.297 07:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:48.297 07:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:39:48.297 07:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:39:48.297 07:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:48.297 07:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:39:48.297 07:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:48.297 07:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:48.297 07:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:48.297 07:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:48.297 07:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:50.246 07:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:50.246 00:39:50.246 real 0m6.368s 00:39:50.246 user 0m8.370s 
00:39:50.246 sys 0m2.459s 00:39:50.246 07:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:50.246 07:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:50.246 ************************************ 00:39:50.246 END TEST nvmf_bdevio 00:39:50.246 ************************************ 00:39:50.246 07:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:39:50.246 00:39:50.246 real 3m54.736s 00:39:50.246 user 8m54.154s 00:39:50.246 sys 1m23.848s 00:39:50.246 07:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:50.246 07:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:50.247 ************************************ 00:39:50.247 END TEST nvmf_target_core_interrupt_mode 00:39:50.247 ************************************ 00:39:50.247 07:25:11 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:50.247 07:25:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:50.247 07:25:11 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:50.247 07:25:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:50.247 ************************************ 00:39:50.247 START TEST nvmf_interrupt 00:39:50.247 ************************************ 00:39:50.247 07:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:50.247 * Looking for test storage... 
00:39:50.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:50.247 07:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:50.247 07:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:39:50.247 07:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:50.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.508 --rc genhtml_branch_coverage=1 00:39:50.508 --rc genhtml_function_coverage=1 00:39:50.508 --rc genhtml_legend=1 00:39:50.508 --rc geninfo_all_blocks=1 00:39:50.508 --rc geninfo_unexecuted_blocks=1 00:39:50.508 00:39:50.508 ' 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:50.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.508 --rc genhtml_branch_coverage=1 00:39:50.508 --rc genhtml_function_coverage=1 00:39:50.508 --rc genhtml_legend=1 00:39:50.508 --rc geninfo_all_blocks=1 00:39:50.508 --rc geninfo_unexecuted_blocks=1 00:39:50.508 00:39:50.508 ' 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:50.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.508 --rc genhtml_branch_coverage=1 00:39:50.508 --rc genhtml_function_coverage=1 00:39:50.508 --rc genhtml_legend=1 00:39:50.508 --rc geninfo_all_blocks=1 00:39:50.508 --rc geninfo_unexecuted_blocks=1 00:39:50.508 00:39:50.508 ' 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:50.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.508 --rc genhtml_branch_coverage=1 00:39:50.508 --rc genhtml_function_coverage=1 00:39:50.508 --rc genhtml_legend=1 00:39:50.508 --rc geninfo_all_blocks=1 00:39:50.508 --rc geninfo_unexecuted_blocks=1 00:39:50.508 00:39:50.508 ' 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.508 07:25:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:39:50.509 07:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:53.050 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:53.050 07:25:13 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:53.050 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:53.050 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:53.051 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:53.051 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:53.051 07:25:13 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:53.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:53.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:39:53.051 00:39:53.051 --- 10.0.0.2 ping statistics --- 00:39:53.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:53.051 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:53.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:53.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:39:53.051 00:39:53.051 --- 10.0.0.1 ping statistics --- 00:39:53.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:53.051 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=455051 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 455051 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 455051 ']' 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:53.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:53.051 [2024-11-18 07:25:13.674947] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:53.051 [2024-11-18 07:25:13.675996] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:39:53.051 [2024-11-18 07:25:13.676057] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:53.051 [2024-11-18 07:25:13.748429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:53.051 [2024-11-18 07:25:13.790324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:39:53.051 [2024-11-18 07:25:13.790388] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:53.051 [2024-11-18 07:25:13.790416] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:53.051 [2024-11-18 07:25:13.790427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:53.051 [2024-11-18 07:25:13.790437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:53.051 [2024-11-18 07:25:13.791896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:53.051 [2024-11-18 07:25:13.791902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:53.051 [2024-11-18 07:25:13.874819] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:53.051 [2024-11-18 07:25:13.874864] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:53.051 [2024-11-18 07:25:13.875090] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:39:53.051 07:25:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:39:53.051 5000+0 records in 00:39:53.051 5000+0 records out 00:39:53.052 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0141492 s, 724 MB/s 00:39:53.052 07:25:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:39:53.052 07:25:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.052 07:25:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:53.052 AIO0 00:39:53.052 07:25:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.052 07:25:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:39:53.052 07:25:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.052 07:25:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:53.052 [2024-11-18 07:25:13.988620] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:53.052 07:25:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.052 07:25:13 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:53.052 07:25:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.052 07:25:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:53.052 07:25:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.052 07:25:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:39:53.052 07:25:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.052 07:25:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:53.052 07:25:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.052 07:25:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:53.052 07:25:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.052 07:25:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:53.052 [2024-11-18 07:25:14.016810] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:53.052 07:25:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.052 07:25:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:53.052 07:25:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 455051 0 00:39:53.052 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 455051 0 idle 00:39:53.052 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=455051 00:39:53.052 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:53.052 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:53.052 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:53.052 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:53.052 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:53.052 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:53.052 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:53.052 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:53.052 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:53.052 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 455051 -w 256 00:39:53.311 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 455051 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.25 reactor_0' 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 455051 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.25 reactor_0 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 455051 1 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 455051 1 idle 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=455051 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 455051 -w 256 00:39:53.312 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 455055 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.00 reactor_1' 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 455055 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.00 reactor_1 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=455215 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
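The busy checks that follow (and the idle checks above) all reduce to the same probe: one batch pass of top in thread mode against the nvmf_tgt pid, filtered to the reactor thread's name, with column 9 (%CPU) compared against the busy/idle thresholds. A condensed sketch of that probe, using only the pipeline already visible in the trace -- the pid 455051 belongs to this particular run and is shown purely for illustration:

    reactor_cpu() {
        # %CPU of one SPDK reactor thread, read the same way the common.sh helpers do above:
        # a single batch iteration of top in thread view, narrowed to reactor_<idx>
        local pid=$1 idx=$2
        top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | sed -e 's/^\s*//g' | awk '{print $9}'
    }

    reactor_cpu 455051 0    # ~0.0 while idle; ~99.9 once spdk_nvme_perf drives I/O
    reactor_cpu 455051 1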
00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 455051 0 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 455051 0 busy 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=455051 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 455051 -w 256 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 455051 root 20 0 128.2g 47616 34560 S 0.0 0.1 0:00.25 reactor_0' 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 455051 root 20 0 128.2g 47616 34560 S 0.0 0.1 0:00.25 reactor_0 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:53.571 07:25:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 455051 -w 256 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 455051 root 20 0 128.2g 48000 34560 R 99.9 0.1 0:02.54 reactor_0' 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 455051 root 20 0 128.2g 48000 34560 R 99.9 0.1 0:02.54 reactor_0 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < 
busy_threshold )) 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 455051 1 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 455051 1 busy 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=455051 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 455051 -w 256 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 455055 root 20 0 128.2g 48000 34560 R 99.9 0.1 0:01.31 reactor_1' 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 455055 root 20 0 128.2g 48000 34560 R 99.9 0.1 0:01.31 reactor_1 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:54.946 07:25:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 455215 00:40:04.916 Initializing NVMe Controllers 00:40:04.916 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:04.916 Controller IO queue size 256, less than required. 00:40:04.916 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:04.916 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:04.916 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:04.916 Initialization complete. Launching workers. 
00:40:04.916 ======================================================== 00:40:04.916 Latency(us) 00:40:04.916 Device Information : IOPS MiB/s Average min max 00:40:04.916 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13969.20 54.57 18336.92 4051.60 23706.98 00:40:04.916 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13297.10 51.94 19266.60 3940.82 23795.78 00:40:04.916 ======================================================== 00:40:04.916 Total : 27266.30 106.51 18790.30 3940.82 23795.78 00:40:04.916 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 455051 0 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 455051 0 idle 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=455051 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 455051 -w 256 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 455051 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:20.20 reactor_0' 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 455051 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:20.20 reactor_0 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 455051 1 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 455051 1 idle 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=455051 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 455051 -w 256 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 455055 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:09.97 reactor_1' 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 455055 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:09.97 reactor_1 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:04.916 07:25:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:04.916 07:25:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:40:04.916 07:25:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:40:04.916 07:25:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:04.916 07:25:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:04.916 07:25:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:40:06.293 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:06.293 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:06.293 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:06.293 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:06.293 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:06.293 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:40:06.293 07:25:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:40:06.293 07:25:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 455051 0 00:40:06.293 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 455051 0 idle 00:40:06.293 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=455051 00:40:06.293 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:06.293 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:06.293 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:06.293 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:06.293 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:06.293 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:06.293 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:06.293 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:06.293 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:06.293 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:06.293 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 455051 -w 256 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 455051 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:20.29 reactor_0' 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 455051 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:20.29 reactor_0 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 455051 1 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 455051 1 idle 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=455051 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:06.553 07:25:27 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 455051 -w 256 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 455055 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:10.01 reactor_1' 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 455055 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:10.01 reactor_1 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:06.553 07:25:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:06.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:06.812 07:25:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:06.812 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:40:06.812 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:06.812 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:06.812 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:06.812 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:06.812 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:40:06.812 07:25:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:40:06.812 07:25:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:40:06.812 07:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:06.812 07:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:40:06.812 07:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:06.812 07:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:40:06.812 07:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:06.812 07:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:06.812 rmmod nvme_tcp 00:40:06.812 rmmod nvme_fabrics 00:40:06.812 rmmod nvme_keyring 00:40:06.812 07:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:06.812 07:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:40:06.812 07:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:40:06.812 07:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 455051 ']' 00:40:06.812 
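Both waitforserial and waitforserial_disconnect in the trace are simple poll loops over `lsblk -o NAME,SERIAL`: wait for a block device carrying the subsystem serial to appear after `nvme connect`, then wait for it to vanish after `nvme disconnect`. A hedged sketch of the same pattern, with the retry count and sleep interval copied from the logged values (helper names are illustrative):

  # Poll until a block device with the given NVMe serial appears (or give up).
  wait_for_serial() {
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          if (( $(lsblk -l -o NAME,SERIAL | grep -c -w "$serial") >= 1 )); then
              return 0
          fi
          sleep 2
      done
      return 1
  }

  # Poll until the device is gone again after `nvme disconnect`.
  wait_for_serial_gone() {
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
          sleep 2
      done
      return 1
  }

  # Usage in the spirit of the trace: wait_for_serial SPDKISFASTANDAWESOME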
07:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 455051 00:40:06.812 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 455051 ']' 00:40:06.812 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 455051 00:40:06.813 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:40:06.813 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:06.813 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 455051 00:40:06.813 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:06.813 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:06.813 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 455051' 00:40:06.813 killing process with pid 455051 00:40:06.813 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 455051 00:40:06.813 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 455051 00:40:07.071 07:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:07.071 07:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:07.071 07:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:07.071 07:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:40:07.071 07:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:40:07.071 07:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:07.071 07:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:40:07.071 07:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:07.071 07:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:07.071 07:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:07.071 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:07.071 07:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:09.611 07:25:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:09.611 00:40:09.611 real 0m18.871s 00:40:09.611 user 0m37.746s 00:40:09.611 sys 0m6.283s 00:40:09.611 07:25:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:09.611 07:25:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:09.611 ************************************ 00:40:09.611 END TEST nvmf_interrupt 00:40:09.611 ************************************ 00:40:09.611 00:40:09.611 real 33m8.068s 00:40:09.611 user 87m59.556s 00:40:09.611 sys 8m6.086s 00:40:09.611 07:25:30 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:09.611 07:25:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:09.611 ************************************ 00:40:09.611 END TEST nvmf_tcp 00:40:09.611 ************************************ 00:40:09.611 07:25:30 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:40:09.611 07:25:30 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:09.611 07:25:30 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
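killprocess, as traced here and again at the end of the spdkcli run below, checks that the pid is still alive and is not the sudo wrapper before killing and reaping it. A simplified sketch of that flow (error handling trimmed; the function name is illustrative):

  # Kill an SPDK app by pid, the way the traced killprocess helper does.
  kill_spdk_app() {
      local pid=$1
      [ -n "$pid" ] || return 1
      # Bail out quietly if the process already exited.
      kill -0 "$pid" 2>/dev/null || { echo "process $pid is not found"; return 0; }
      # Refuse to kill a sudo wrapper; only target the real app (e.g. reactor_0).
      local name
      name=$(ps --no-headers -o comm= "$pid")
      [ "$name" = sudo ] && return 1
      echo "killing process with pid $pid ($name)"
      kill "$pid"
      # Reap it if it is our child; otherwise wait just returns.
      wait "$pid" 2>/dev/null || true
  }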
00:40:09.611 07:25:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:09.611 07:25:30 -- common/autotest_common.sh@10 -- # set +x 00:40:09.611 ************************************ 00:40:09.611 START TEST spdkcli_nvmf_tcp 00:40:09.611 ************************************ 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:09.611 * Looking for test storage... 00:40:09.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:09.611 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:09.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.611 --rc genhtml_branch_coverage=1 00:40:09.611 --rc genhtml_function_coverage=1 00:40:09.611 --rc genhtml_legend=1 00:40:09.612 --rc geninfo_all_blocks=1 00:40:09.612 --rc geninfo_unexecuted_blocks=1 00:40:09.612 00:40:09.612 ' 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:09.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.612 --rc genhtml_branch_coverage=1 00:40:09.612 --rc genhtml_function_coverage=1 00:40:09.612 --rc genhtml_legend=1 00:40:09.612 --rc geninfo_all_blocks=1 00:40:09.612 --rc geninfo_unexecuted_blocks=1 00:40:09.612 00:40:09.612 ' 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:09.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.612 --rc genhtml_branch_coverage=1 00:40:09.612 --rc genhtml_function_coverage=1 00:40:09.612 --rc genhtml_legend=1 00:40:09.612 --rc geninfo_all_blocks=1 00:40:09.612 --rc geninfo_unexecuted_blocks=1 00:40:09.612 00:40:09.612 ' 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:09.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.612 --rc genhtml_branch_coverage=1 00:40:09.612 --rc genhtml_function_coverage=1 00:40:09.612 --rc genhtml_legend=1 00:40:09.612 --rc geninfo_all_blocks=1 00:40:09.612 --rc geninfo_unexecuted_blocks=1 00:40:09.612 00:40:09.612 ' 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:40:09.612 
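The lcov probe above runs a small dotted-version comparison: split both versions on `.`, `-`, or `:`, pad the shorter list, and compare component by component. A hedged reimplementation of that idea (assumes purely numeric components; this is not the suite's exact cmp_versions helper):

  # Return 0 if version $1 is strictly lower than version $2 (e.g. 1.15 < 2).
  version_lt() {
      local -a v1 v2
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      local i a b
      for (( i = 0; i < n; i++ )); do
          a=${v1[i]:-0} b=${v2[i]:-0}       # missing components count as 0
          (( a < b )) && return 0
          (( a > b )) && return 1
      done
      return 1   # versions are equal
  }

  version_lt 1.15 2 && echo "lcov is older than 2.x, keep the legacy --rc options"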
07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:40:09.612 07:25:30 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:09.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=457215 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 457215 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 457215 ']' 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:09.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:09.612 [2024-11-18 07:25:30.318012] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
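run_nvmf_tgt launches the target on a two-core mask and then blocks until its JSON-RPC socket is ready, which is what the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message below reflects. A minimal sketch of that start-and-wait pattern using the binary and socket paths shown in the trace; polling for the socket node plus a liveness check is an assumption here, the real waitforlisten helper may additionally issue an RPC:

  # Start the nvmf target on cores 0-1 and wait for its RPC socket to appear.
  SPDK_BIN=./build/bin/nvmf_tgt          # path as used in the trace
  RPC_SOCK=/var/tmp/spdk.sock

  "$SPDK_BIN" -m 0x3 -p 0 &
  tgt_pid=$!

  for (( retry = 0; retry < 100; retry++ )); do
      # Give up early if the target died during startup.
      kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
      # The app creates its UNIX-domain RPC socket once it can accept commands.
      [ -S "$RPC_SOCK" ] && break
      sleep 0.5
  done
  echo "nvmf_tgt (pid $tgt_pid) is listening on $RPC_SOCK"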
00:40:09.612 [2024-11-18 07:25:30.318112] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457215 ] 00:40:09.612 [2024-11-18 07:25:30.383105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:09.612 [2024-11-18 07:25:30.432032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:09.612 [2024-11-18 07:25:30.432035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:09.612 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:09.870 07:25:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:40:09.870 07:25:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:40:09.870 07:25:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:40:09.870 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:09.870 07:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:09.870 07:25:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:40:09.870 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:40:09.870 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:40:09.870 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:40:09.870 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:40:09.870 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:40:09.870 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:40:09.870 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:09.870 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:40:09.870 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:40:09.870 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:09.870 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:09.870 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:40:09.870 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:09.870 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:09.870 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:40:09.870 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:40:09.870 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:09.870 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:09.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:09.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:40:09.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:40:09.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:09.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:40:09.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:09.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:40:09.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:40:09.871 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:40:09.871 ' 00:40:12.401 [2024-11-18 07:25:33.283348] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:13.773 [2024-11-18 07:25:34.551683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:40:16.301 [2024-11-18 07:25:36.894971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:40:18.206 [2024-11-18 07:25:38.909188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:19.580 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:19.580 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:19.580 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:19.580 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:19.580 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:19.580 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:19.580 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:19.580 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:19.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:19.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:19.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:19.580 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:19.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:19.580 Executing command: 
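spdkcli_job.py replays (command, expected-output, should-succeed) triples against the running target, which is why each line of output below is printed as "Executing command: [cmd, expected, flag]". The same configuration can be built by feeding the commands straight to scripts/spdkcli.py; a trimmed sketch covering one bdev, the transport, and one subsystem from the list above, assuming spdkcli.py accepts one command per invocation the way the later `spdkcli.py ll /nvmf` call does:

  # Rebuild a slice of the traced configuration with spdkcli one-liners.
  SPDKCLI=./scripts/spdkcli.py     # run from the SPDK repo root

  $SPDKCLI /bdevs/malloc create 32 512 Malloc3           # size/block size/name as traced
  $SPDKCLI nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
  $SPDKCLI /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW \
      max_namespaces=4 allow_any_host=True
  $SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
  $SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create \
      tcp 127.0.0.1 4260 IPv4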
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:19.580 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:19.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:19.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:19.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:19.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:19.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:19.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:19.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:19.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:19.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:19.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:19.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:19.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:19.580 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:19.838 07:25:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:40:19.838 07:25:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:19.838 07:25:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:19.838 07:25:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:19.838 07:25:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:19.838 07:25:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:19.838 07:25:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:19.838 07:25:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:20.096 07:25:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:20.096 07:25:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:20.096 07:25:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:20.096 07:25:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:20.096 07:25:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:20.354 
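check_match, traced just above, captures `spdkcli.py ll /nvmf`, stores it next to the .match template, and lets the match tool compare the two (the template can contain wildcards, so it is not a plain diff). A hedged sketch of that verification step; the capture-file path is written out explicitly and the suffix-stripping behaviour of the match tool is an assumption based on the logged rm -f of the .test file:

  # Verify the live /nvmf tree against the golden match file.
  MATCH_DIR=./test/spdkcli/match_files
  ./scripts/spdkcli.py ll /nvmf > "$MATCH_DIR/spdkcli_nvmf.test"
  # match takes the .match template and compares it against the .test file beside it.
  ./test/app/match/match "$MATCH_DIR/spdkcli_nvmf.test.match"
  rm -f "$MATCH_DIR/spdkcli_nvmf.test"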
07:25:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:20.354 07:25:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:20.354 07:25:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:20.354 07:25:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:20.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:40:20.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:20.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:20.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:20.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:20.354 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:20.354 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:20.354 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:20.354 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:20.354 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:20.354 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:20.354 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:20.354 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:20.354 ' 00:40:25.629 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:25.629 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:25.629 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:25.629 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:25.629 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:25.629 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:25.629 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:25.629 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:25.629 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:25.629 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:25.629 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:40:25.629 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:25.629 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:25.629 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:25.629 07:25:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:25.629 07:25:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:25.629 07:25:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:25.629 
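The teardown list above mirrors the creation in reverse: namespaces and hosts come off the subsystems first, then listeners, then the subsystems themselves, and only then the malloc bdevs, so nothing is deleted while still referenced. A short sketch of the same ordering for the cnode1 slice from the earlier create sketch (same hypothetical one-command-per-invocation spdkcli.py usage):

  # Tear down in reverse dependency order.
  SPDKCLI=./scripts/spdkcli.py
  $SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1
  $SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4260
  $SPDKCLI /nvmf/subsystem delete nqn.2014-08.org.spdk:cnode1
  $SPDKCLI /bdevs/malloc delete Malloc3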
07:25:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 457215 00:40:25.629 07:25:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 457215 ']' 00:40:25.629 07:25:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 457215 00:40:25.629 07:25:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:40:25.629 07:25:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:25.629 07:25:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 457215 00:40:25.629 07:25:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:25.629 07:25:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:25.629 07:25:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 457215' 00:40:25.629 killing process with pid 457215 00:40:25.629 07:25:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 457215 00:40:25.629 07:25:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 457215 00:40:25.887 07:25:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:40:25.887 07:25:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:25.887 07:25:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 457215 ']' 00:40:25.887 07:25:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 457215 00:40:25.887 07:25:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 457215 ']' 00:40:25.887 07:25:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 457215 00:40:25.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (457215) - No such process 00:40:25.887 07:25:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 457215 is not found' 00:40:25.887 Process with pid 457215 is not found 00:40:25.887 07:25:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:25.887 07:25:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:25.887 07:25:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:25.887 00:40:25.887 real 0m16.613s 00:40:25.887 user 0m35.408s 00:40:25.887 sys 0m0.808s 00:40:25.887 07:25:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:25.887 07:25:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:25.887 ************************************ 00:40:25.887 END TEST spdkcli_nvmf_tcp 00:40:25.887 ************************************ 00:40:25.887 07:25:46 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:25.887 07:25:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:25.887 07:25:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:25.887 07:25:46 -- common/autotest_common.sh@10 -- # set +x 00:40:25.887 ************************************ 00:40:25.887 START TEST nvmf_identify_passthru 00:40:25.887 ************************************ 00:40:25.887 07:25:46 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:25.887 * Looking for test storage... 
00:40:25.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:25.887 07:25:46 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:25.887 07:25:46 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:40:25.887 07:25:46 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:26.146 07:25:46 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:40:26.146 07:25:46 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:26.146 07:25:46 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:26.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:26.146 --rc genhtml_branch_coverage=1 00:40:26.146 --rc genhtml_function_coverage=1 00:40:26.146 --rc genhtml_legend=1 00:40:26.146 --rc geninfo_all_blocks=1 00:40:26.146 --rc geninfo_unexecuted_blocks=1 00:40:26.146 00:40:26.146 ' 00:40:26.146 07:25:46 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:26.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:26.146 --rc genhtml_branch_coverage=1 00:40:26.146 --rc genhtml_function_coverage=1 00:40:26.146 --rc genhtml_legend=1 00:40:26.146 --rc geninfo_all_blocks=1 00:40:26.146 --rc geninfo_unexecuted_blocks=1 00:40:26.146 00:40:26.146 ' 00:40:26.146 07:25:46 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:26.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:26.146 --rc genhtml_branch_coverage=1 00:40:26.146 --rc genhtml_function_coverage=1 00:40:26.146 --rc genhtml_legend=1 00:40:26.146 --rc geninfo_all_blocks=1 00:40:26.146 --rc geninfo_unexecuted_blocks=1 00:40:26.146 00:40:26.146 ' 00:40:26.146 07:25:46 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:26.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:26.146 --rc genhtml_branch_coverage=1 00:40:26.146 --rc genhtml_function_coverage=1 00:40:26.146 --rc genhtml_legend=1 00:40:26.146 --rc geninfo_all_blocks=1 00:40:26.146 --rc geninfo_unexecuted_blocks=1 00:40:26.146 00:40:26.146 ' 00:40:26.146 07:25:46 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:26.146 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:26.146 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:26.146 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:26.146 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:26.146 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:40:26.146 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:26.146 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:26.146 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:26.146 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:26.146 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:26.146 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:26.146 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:26.146 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:26.146 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:26.146 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:26.146 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:26.146 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:26.146 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:26.146 07:25:46 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:26.146 07:25:46 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.146 07:25:46 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.147 07:25:46 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.147 07:25:46 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:26.147 07:25:46 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
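The host identity used by every `nvme connect` in this run comes from `nvme gen-hostnqn`, with the host ID taken as the UUID suffix of that NQN (the two logged values share the same 5b23e107-... UUID). A small sketch of deriving the pair the same way; the suffix-stripping is an assumption based on the logged values, not a documented contract:

  # Generate a host NQN and derive the matching host ID from its UUID suffix.
  HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  HOSTID=${HOSTNQN##*uuid:}            # keep only the <uuid> part
  NVME_HOST=(--hostnqn="$HOSTNQN" --hostid="$HOSTID")

  echo "connecting as $HOSTNQN / $HOSTID"
  # later used as: nvme connect "${NVME_HOST[@]}" -t tcp -n <subnqn> -a <ip> -s 4420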
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.147 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:40:26.147 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:26.147 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:26.147 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:26.147 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:26.147 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:26.147 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:26.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:26.147 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:26.147 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:26.147 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:26.147 07:25:46 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:26.147 07:25:46 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:26.147 07:25:46 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:26.147 07:25:46 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:26.147 07:25:46 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:26.147 07:25:46 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.147 07:25:46 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.147 07:25:46 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.147 07:25:46 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:26.147 07:25:46 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.147 07:25:46 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:26.147 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:26.147 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:26.147 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:26.147 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:26.147 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:26.147 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:26.147 07:25:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:26.147 07:25:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:26.147 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:26.147 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:26.147 07:25:46 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:40:26.147 07:25:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:40:28.682 07:25:49 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:28.682 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:28.682 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:28.682 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:28.682 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:28.682 07:25:49 nvmf_identify_passthru -- 
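The device scan above walks the supported NIC PCI IDs and, for each function, reads from sysfs which net interface the kernel bound to it, which is where the "Found net devices under 0000:0a:00.x: cvl_0_x" lines come from. A condensed sketch of just that sysfs lookup (the PCI-ID whitelist and driver checks from the trace are omitted; the two addresses are taken from this log):

  # For each candidate PCI function, list the net devices the kernel bound to it.
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
      [ -e "${net_devs[0]}" ] || { echo "no net device under $pci"; continue; }
      # Strip the sysfs path, keeping just the interface names (e.g. cvl_0_0).
      net_devs=( "${net_devs[@]##*/}" )
      echo "Found net devices under $pci: ${net_devs[*]}"
  done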
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:28.682 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:28.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:28.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:40:28.682 00:40:28.682 --- 10.0.0.2 ping statistics --- 00:40:28.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:28.682 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:40:28.683 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:28.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:28.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:40:28.683 00:40:28.683 --- 10.0.0.1 ping statistics --- 00:40:28.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:28.683 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:40:28.683 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:28.683 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:40:28.683 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:28.683 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:28.683 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:28.683 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:28.683 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:28.683 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:28.683 07:25:49 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:28.683 07:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:28.683 07:25:49 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:28.683 07:25:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:28.683 07:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:28.683 07:25:49 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:40:28.683 07:25:49 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:40:28.683 07:25:49 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:40:28.683 07:25:49 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:40:28.683 07:25:49 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:40:28.683 07:25:49 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:40:28.683 07:25:49 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:28.683 07:25:49 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:28.683 07:25:49 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:40:28.683 07:25:49 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:40:28.683 07:25:49 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:40:28.683 07:25:49 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:40:28.683 07:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:40:28.683 07:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:40:28.683 07:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:28.683 07:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:28.683 07:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:32.869 07:25:53 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLJ916004901P0FGN 00:40:32.869 07:25:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:32.869 07:25:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:32.869 07:25:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:37.055 07:25:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:40:37.055 07:25:57 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:37.055 07:25:57 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:37.055 07:25:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:37.055 07:25:57 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:37.055 07:25:57 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:37.055 07:25:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:37.055 07:25:57 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=461841 00:40:37.055 07:25:57 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:37.055 07:25:57 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:37.055 07:25:57 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 461841 00:40:37.055 07:25:57 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 461841 ']' 00:40:37.055 07:25:57 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:37.055 07:25:57 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:37.055 07:25:57 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:37.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:37.055 07:25:57 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:37.055 07:25:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:37.055 [2024-11-18 07:25:57.795290] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:40:37.055 [2024-11-18 07:25:57.795392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:37.055 [2024-11-18 07:25:57.868399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:37.055 [2024-11-18 07:25:57.917929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:37.055 [2024-11-18 07:25:57.917992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
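The identify step traced above simply scrapes 'Serial Number:' and 'Model Number:' out of spdk_nvme_identify for the first NVMe bdf reported by gen_nvme.sh, then launches the passthru target inside the test namespace. A minimal sketch of those commands, with the Jenkins workspace prefix shortened to relative ./build and ./scripts paths for readability (the bdf 0000:88:00.0 and the 0xF core mask are simply the values this runner used):

  bdf=0000:88:00.0                                   # first controller reported by scripts/gen_nvme.sh on this host
  nvme_serial_number=$(./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
      | grep 'Serial Number:' | awk '{print $3}')    # PHLJ916004901P0FGN in this run
  nvme_model_number=$(./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
      | grep 'Model Number:' | awk '{print $3}')     # INTEL in this run
  # The target is then started inside the test namespace and waits for RPC configuration;
  # the trace's waitforlisten helper then polls the new pid and the RPC socket.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &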
00:40:37.055 [2024-11-18 07:25:57.918005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:37.055 [2024-11-18 07:25:57.918017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:37.055 [2024-11-18 07:25:57.918026] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:37.055 [2024-11-18 07:25:57.919573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:37.055 [2024-11-18 07:25:57.919642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:37.055 [2024-11-18 07:25:57.919701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:37.055 [2024-11-18 07:25:57.919704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:37.313 07:25:58 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:37.313 07:25:58 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:40:37.313 07:25:58 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:37.313 07:25:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.313 07:25:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:37.313 INFO: Log level set to 20 00:40:37.313 INFO: Requests: 00:40:37.313 { 00:40:37.313 "jsonrpc": "2.0", 00:40:37.313 "method": "nvmf_set_config", 00:40:37.313 "id": 1, 00:40:37.313 "params": { 00:40:37.313 "admin_cmd_passthru": { 00:40:37.313 "identify_ctrlr": true 00:40:37.313 } 00:40:37.313 } 00:40:37.313 } 00:40:37.313 00:40:37.313 INFO: response: 00:40:37.313 { 00:40:37.313 "jsonrpc": "2.0", 00:40:37.313 "id": 1, 00:40:37.313 "result": true 00:40:37.313 } 00:40:37.313 00:40:37.313 07:25:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.313 07:25:58 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:40:37.313 07:25:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.313 07:25:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:37.313 INFO: Setting log level to 20 00:40:37.313 INFO: Setting log level to 20 00:40:37.313 INFO: Log level set to 20 00:40:37.313 INFO: Log level set to 20 00:40:37.313 INFO: Requests: 00:40:37.313 { 00:40:37.313 "jsonrpc": "2.0", 00:40:37.313 "method": "framework_start_init", 00:40:37.313 "id": 1 00:40:37.313 } 00:40:37.313 00:40:37.313 INFO: Requests: 00:40:37.313 { 00:40:37.313 "jsonrpc": "2.0", 00:40:37.313 "method": "framework_start_init", 00:40:37.313 "id": 1 00:40:37.313 } 00:40:37.313 00:40:37.313 [2024-11-18 07:25:58.147785] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:37.313 INFO: response: 00:40:37.313 { 00:40:37.313 "jsonrpc": "2.0", 00:40:37.313 "id": 1, 00:40:37.313 "result": true 00:40:37.313 } 00:40:37.313 00:40:37.313 INFO: response: 00:40:37.313 { 00:40:37.313 "jsonrpc": "2.0", 00:40:37.313 "id": 1, 00:40:37.313 "result": true 00:40:37.313 } 00:40:37.313 00:40:37.313 07:25:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.313 07:25:58 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:37.313 07:25:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.313 07:25:58 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:40:37.313 INFO: Setting log level to 40 00:40:37.313 INFO: Setting log level to 40 00:40:37.313 INFO: Setting log level to 40 00:40:37.313 [2024-11-18 07:25:58.157923] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:37.313 07:25:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.313 07:25:58 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:40:37.313 07:25:58 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:37.313 07:25:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:37.313 07:25:58 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:40:37.313 07:25:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.313 07:25:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:40.592 Nvme0n1 00:40:40.592 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.592 07:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:40:40.592 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.592 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:40.592 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.592 07:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:40.592 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.592 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:40.592 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.592 07:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:40.592 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.592 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:40.592 [2024-11-18 07:26:01.057425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:40.592 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.592 07:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:40:40.592 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.592 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:40.592 [ 00:40:40.592 { 00:40:40.592 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:40.592 "subtype": "Discovery", 00:40:40.592 "listen_addresses": [], 00:40:40.592 "allow_any_host": true, 00:40:40.592 "hosts": [] 00:40:40.592 }, 00:40:40.592 { 00:40:40.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:40.592 "subtype": "NVMe", 00:40:40.592 "listen_addresses": [ 00:40:40.592 { 00:40:40.592 "trtype": "TCP", 00:40:40.592 "adrfam": "IPv4", 00:40:40.592 "traddr": "10.0.0.2", 00:40:40.592 "trsvcid": "4420" 00:40:40.592 } 00:40:40.592 ], 00:40:40.592 "allow_any_host": true, 00:40:40.592 "hosts": [], 00:40:40.592 "serial_number": 
"SPDK00000000000001", 00:40:40.592 "model_number": "SPDK bdev Controller", 00:40:40.592 "max_namespaces": 1, 00:40:40.592 "min_cntlid": 1, 00:40:40.592 "max_cntlid": 65519, 00:40:40.592 "namespaces": [ 00:40:40.592 { 00:40:40.592 "nsid": 1, 00:40:40.592 "bdev_name": "Nvme0n1", 00:40:40.592 "name": "Nvme0n1", 00:40:40.592 "nguid": "DF6273693AFF42FFA659776E6EA3AF13", 00:40:40.592 "uuid": "df627369-3aff-42ff-a659-776e6ea3af13" 00:40:40.592 } 00:40:40.592 ] 00:40:40.592 } 00:40:40.592 ] 00:40:40.592 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.592 07:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:40.592 07:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:40:40.592 07:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:40:40.592 07:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:40:40.592 07:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:40.592 07:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:40:40.592 07:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:40:40.592 07:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:40:40.592 07:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:40:40.592 07:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:40:40.592 07:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:40.592 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.592 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:40.851 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.851 07:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:40:40.851 07:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:40:40.851 07:26:01 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:40.851 07:26:01 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:40:40.851 07:26:01 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:40.851 07:26:01 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:40:40.851 07:26:01 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:40.851 07:26:01 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:40.851 rmmod nvme_tcp 00:40:40.851 rmmod nvme_fabrics 00:40:40.851 rmmod nvme_keyring 00:40:40.851 07:26:01 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:40.851 07:26:01 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:40:40.851 07:26:01 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:40:40.851 07:26:01 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 461841 ']' 00:40:40.851 07:26:01 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 461841 00:40:40.851 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 461841 ']' 00:40:40.851 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 461841 00:40:40.851 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:40:40.851 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:40.851 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 461841 00:40:40.851 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:40.851 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:40.851 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 461841' 00:40:40.851 killing process with pid 461841 00:40:40.851 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 461841 00:40:40.851 07:26:01 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 461841 00:40:42.224 07:26:03 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:42.224 07:26:03 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:42.224 07:26:03 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:42.224 07:26:03 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:40:42.224 07:26:03 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:40:42.224 07:26:03 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:42.224 07:26:03 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:40:42.224 07:26:03 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:42.224 07:26:03 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:42.224 07:26:03 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:42.224 07:26:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:42.224 07:26:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:44.760 07:26:05 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:44.760 00:40:44.760 real 0m18.482s 00:40:44.760 user 0m27.717s 00:40:44.760 sys 0m2.567s 00:40:44.760 07:26:05 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:44.760 07:26:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:44.760 ************************************ 00:40:44.760 END TEST nvmf_identify_passthru 00:40:44.760 ************************************ 00:40:44.760 07:26:05 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:44.760 07:26:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:44.760 07:26:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:44.760 07:26:05 -- common/autotest_common.sh@10 -- # set +x 00:40:44.760 ************************************ 00:40:44.760 START TEST nvmf_dif 00:40:44.760 ************************************ 00:40:44.760 07:26:05 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:44.760 * Looking for test storage... 
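The identify_passthru run above finishes with the standard nvmftestfini teardown: the target process is killed, the kernel NVMe/TCP modules are unloaded, the SPDK-tagged iptables rule is stripped, and the namespace and addresses are cleaned up before nvmf_dif starts. A rough sketch of the equivalent commands (the body of _remove_spdk_ns is not shown in the trace, so the ip netns delete line is an assumption):

  kill 461841                                           # nvmfpid recorded when nvmf_tgt was started
  modprobe -v -r nvme-tcp                               # rmmod output shows nvme_tcp, nvme_fabrics, nvme_keyring going away
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the SPDK_NVMF-tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                       # assumption: what _remove_spdk_ns most likely does here
  ip -4 addr flush cvl_0_1                              # clear the initiator-side interface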
00:40:44.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:44.760 07:26:05 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:44.760 07:26:05 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:40:44.760 07:26:05 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:44.760 07:26:05 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:44.760 07:26:05 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:44.760 07:26:05 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:44.760 07:26:05 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:44.760 07:26:05 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:40:44.760 07:26:05 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:40:44.760 07:26:05 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:40:44.760 07:26:05 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:40:44.760 07:26:05 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:40:44.760 07:26:05 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:40:44.760 07:26:05 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:40:44.760 07:26:05 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:44.760 07:26:05 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:40:44.760 07:26:05 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:40:44.760 07:26:05 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:44.760 07:26:05 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:44.760 07:26:05 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:40:44.761 07:26:05 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:40:44.761 07:26:05 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:44.761 07:26:05 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:40:44.761 07:26:05 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:40:44.761 07:26:05 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:40:44.761 07:26:05 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:40:44.761 07:26:05 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:44.761 07:26:05 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:40:44.761 07:26:05 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:40:44.761 07:26:05 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:44.761 07:26:05 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:44.761 07:26:05 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:40:44.761 07:26:05 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:44.761 07:26:05 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:44.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:44.761 --rc genhtml_branch_coverage=1 00:40:44.761 --rc genhtml_function_coverage=1 00:40:44.761 --rc genhtml_legend=1 00:40:44.761 --rc geninfo_all_blocks=1 00:40:44.761 --rc geninfo_unexecuted_blocks=1 00:40:44.761 00:40:44.761 ' 00:40:44.761 07:26:05 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:44.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:44.761 --rc genhtml_branch_coverage=1 00:40:44.761 --rc genhtml_function_coverage=1 00:40:44.761 --rc genhtml_legend=1 00:40:44.761 --rc geninfo_all_blocks=1 00:40:44.761 --rc geninfo_unexecuted_blocks=1 00:40:44.761 00:40:44.761 ' 00:40:44.761 07:26:05 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:40:44.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:44.761 --rc genhtml_branch_coverage=1 00:40:44.761 --rc genhtml_function_coverage=1 00:40:44.761 --rc genhtml_legend=1 00:40:44.761 --rc geninfo_all_blocks=1 00:40:44.761 --rc geninfo_unexecuted_blocks=1 00:40:44.761 00:40:44.761 ' 00:40:44.761 07:26:05 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:44.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:44.761 --rc genhtml_branch_coverage=1 00:40:44.761 --rc genhtml_function_coverage=1 00:40:44.761 --rc genhtml_legend=1 00:40:44.761 --rc geninfo_all_blocks=1 00:40:44.761 --rc geninfo_unexecuted_blocks=1 00:40:44.761 00:40:44.761 ' 00:40:44.761 07:26:05 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:44.761 07:26:05 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:40:44.761 07:26:05 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:44.761 07:26:05 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:44.761 07:26:05 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:44.761 07:26:05 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.761 07:26:05 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.761 07:26:05 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.761 07:26:05 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:40:44.761 07:26:05 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:44.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:44.761 07:26:05 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:40:44.761 07:26:05 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:40:44.761 07:26:05 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:40:44.761 07:26:05 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:40:44.761 07:26:05 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:44.761 07:26:05 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:44.761 07:26:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:44.761 07:26:05 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:40:44.761 07:26:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:46.663 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:46.663 
07:26:07 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:46.663 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:46.663 07:26:07 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:46.664 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:46.664 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:46.664 07:26:07 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:46.923 07:26:07 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:46.923 07:26:07 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:46.923 07:26:07 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:46.923 07:26:07 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:46.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:46.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:40:46.923 00:40:46.923 --- 10.0.0.2 ping statistics --- 00:40:46.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:46.923 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:40:46.923 07:26:07 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:46.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:46.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:40:46.923 00:40:46.923 --- 10.0.0.1 ping statistics --- 00:40:46.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:46.923 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:40:46.923 07:26:07 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:46.923 07:26:07 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:40:46.923 07:26:07 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:40:46.923 07:26:07 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:47.858 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:40:47.858 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:40:47.858 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:40:47.858 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:40:47.858 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:40:47.858 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:40:47.858 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:40:47.858 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:40:47.858 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:40:47.858 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:40:47.858 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:40:47.858 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:40:47.858 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:40:47.858 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:40:47.858 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:40:47.858 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:40:47.859 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:40:48.117 07:26:08 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:48.117 07:26:08 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:48.117 07:26:08 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:48.117 07:26:08 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:48.117 07:26:08 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:48.117 07:26:08 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:48.117 07:26:09 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:40:48.117 07:26:09 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:40:48.117 07:26:09 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:48.117 07:26:09 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:48.117 07:26:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:48.117 07:26:09 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=464992 00:40:48.117 07:26:09 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:40:48.117 07:26:09 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 464992 00:40:48.117 07:26:09 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 464992 ']' 00:40:48.117 07:26:09 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:48.117 07:26:09 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:48.117 07:26:09 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:40:48.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:48.117 07:26:09 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:48.117 07:26:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:48.117 [2024-11-18 07:26:09.060176] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:40:48.117 [2024-11-18 07:26:09.060251] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:48.375 [2024-11-18 07:26:09.133695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:48.375 [2024-11-18 07:26:09.180615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:48.375 [2024-11-18 07:26:09.180669] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:48.375 [2024-11-18 07:26:09.180683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:48.375 [2024-11-18 07:26:09.180694] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:48.375 [2024-11-18 07:26:09.180704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:48.375 [2024-11-18 07:26:09.181291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:48.375 07:26:09 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:48.375 07:26:09 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:40:48.375 07:26:09 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:48.375 07:26:09 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:48.375 07:26:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:48.375 07:26:09 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:48.375 07:26:09 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:40:48.375 07:26:09 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:40:48.375 07:26:09 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.375 07:26:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:48.375 [2024-11-18 07:26:09.321231] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:48.375 07:26:09 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.375 07:26:09 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:40:48.375 07:26:09 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:48.375 07:26:09 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:48.375 07:26:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:48.375 ************************************ 00:40:48.375 START TEST fio_dif_1_default 00:40:48.375 ************************************ 00:40:48.375 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:40:48.375 07:26:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:40:48.375 07:26:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:40:48.375 07:26:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:40:48.375 07:26:09 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:40:48.375 07:26:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:40:48.375 07:26:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:48.375 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.375 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:48.633 bdev_null0 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:48.633 [2024-11-18 07:26:09.377582] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:48.633 { 00:40:48.633 "params": { 00:40:48.633 "name": "Nvme$subsystem", 00:40:48.633 "trtype": "$TEST_TRANSPORT", 00:40:48.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:48.633 "adrfam": "ipv4", 00:40:48.633 "trsvcid": "$NVMF_PORT", 00:40:48.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:48.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:48.633 "hdgst": ${hdgst:-false}, 00:40:48.633 "ddgst": ${ddgst:-false} 00:40:48.633 }, 00:40:48.633 "method": "bdev_nvme_attach_controller" 00:40:48.633 } 00:40:48.633 EOF 00:40:48.633 )") 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
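At this point fio_dif_1_default has finished building the target side and is generating the JSON that the fio spdk_bdev plugin will use to attach to it over TCP. A compact sketch of that same target-side sequence, issued through scripts/rpc.py (rpc_cmd in the trace is assumed to be a thin wrapper around it, talking to the default /var/tmp/spdk.sock; paths assume the SPDK repo root):

  # Transport with DIF insert/strip, a 64 MB null bdev with 512-byte blocks and DIF type 1,
  # then a subsystem exposing it on 10.0.0.2:4420 -- all taken from the trace above.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

fio is then pointed at the generated config via --ioengine=spdk_bdev --spdk_json_conf, with the spdk_bdev fio plugin pulled in through LD_PRELOAD, as the trace below shows.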
00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:48.633 "params": { 00:40:48.633 "name": "Nvme0", 00:40:48.633 "trtype": "tcp", 00:40:48.633 "traddr": "10.0.0.2", 00:40:48.633 "adrfam": "ipv4", 00:40:48.633 "trsvcid": "4420", 00:40:48.633 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:48.633 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:48.633 "hdgst": false, 00:40:48.633 "ddgst": false 00:40:48.633 }, 00:40:48.633 "method": "bdev_nvme_attach_controller" 00:40:48.633 }' 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:48.633 07:26:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:48.891 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:48.891 fio-3.35 00:40:48.891 Starting 1 thread 00:41:01.082 00:41:01.082 filename0: (groupid=0, jobs=1): err= 0: pid=465219: Mon Nov 18 07:26:20 2024 00:41:01.082 read: IOPS=216, BW=864KiB/s (885kB/s)(8656KiB/10017msec) 00:41:01.082 slat (nsec): min=4144, max=30661, avg=9220.03, stdev=2653.36 00:41:01.082 clat (usec): min=529, max=46399, avg=18486.01, stdev=20166.29 00:41:01.082 lat (usec): min=537, max=46413, avg=18495.23, stdev=20166.22 00:41:01.082 clat percentiles (usec): 00:41:01.082 | 1.00th=[ 545], 5.00th=[ 562], 10.00th=[ 570], 20.00th=[ 586], 00:41:01.082 | 30.00th=[ 611], 40.00th=[ 635], 50.00th=[ 701], 60.00th=[41157], 00:41:01.082 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:01.082 | 99.00th=[41681], 99.50th=[41681], 99.90th=[46400], 99.95th=[46400], 00:41:01.082 | 99.99th=[46400] 00:41:01.082 bw ( KiB/s): min= 512, max= 1216, per=99.98%, avg=864.00, stdev=184.85, samples=20 00:41:01.082 iops : min= 128, max= 304, avg=216.00, stdev=46.21, samples=20 00:41:01.082 lat (usec) : 750=53.56%, 1000=2.45% 00:41:01.082 lat (msec) : 50=43.99% 00:41:01.082 cpu : usr=91.21%, sys=8.50%, ctx=13, majf=0, minf=227 00:41:01.082 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:01.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.082 issued rwts: total=2164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:01.082 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:01.082 
00:41:01.082 Run status group 0 (all jobs): 00:41:01.082 READ: bw=864KiB/s (885kB/s), 864KiB/s-864KiB/s (885kB/s-885kB/s), io=8656KiB (8864kB), run=10017-10017msec 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.082 00:41:01.082 real 0m11.166s 00:41:01.082 user 0m10.495s 00:41:01.082 sys 0m1.113s 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:01.082 ************************************ 00:41:01.082 END TEST fio_dif_1_default 00:41:01.082 ************************************ 00:41:01.082 07:26:20 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:41:01.082 07:26:20 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:01.082 07:26:20 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:01.082 07:26:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:01.082 ************************************ 00:41:01.082 START TEST fio_dif_1_multi_subsystems 00:41:01.082 ************************************ 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:01.082 bdev_null0 00:41:01.082 07:26:20 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.082 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:01.082 [2024-11-18 07:26:20.586599] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:01.083 bdev_null1 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:01.083 { 00:41:01.083 "params": { 00:41:01.083 "name": "Nvme$subsystem", 00:41:01.083 "trtype": "$TEST_TRANSPORT", 00:41:01.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:01.083 "adrfam": "ipv4", 00:41:01.083 "trsvcid": "$NVMF_PORT", 00:41:01.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:01.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:01.083 "hdgst": ${hdgst:-false}, 00:41:01.083 "ddgst": ${ddgst:-false} 00:41:01.083 }, 00:41:01.083 "method": "bdev_nvme_attach_controller" 00:41:01.083 } 00:41:01.083 EOF 00:41:01.083 )") 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:01.083 { 00:41:01.083 "params": { 00:41:01.083 "name": "Nvme$subsystem", 00:41:01.083 "trtype": "$TEST_TRANSPORT", 00:41:01.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:01.083 "adrfam": "ipv4", 00:41:01.083 "trsvcid": "$NVMF_PORT", 00:41:01.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:01.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:01.083 "hdgst": ${hdgst:-false}, 00:41:01.083 "ddgst": ${ddgst:-false} 00:41:01.083 }, 00:41:01.083 "method": "bdev_nvme_attach_controller" 00:41:01.083 } 00:41:01.083 EOF 00:41:01.083 )") 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:01.083 "params": { 00:41:01.083 "name": "Nvme0", 00:41:01.083 "trtype": "tcp", 00:41:01.083 "traddr": "10.0.0.2", 00:41:01.083 "adrfam": "ipv4", 00:41:01.083 "trsvcid": "4420", 00:41:01.083 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:01.083 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:01.083 "hdgst": false, 00:41:01.083 "ddgst": false 00:41:01.083 }, 00:41:01.083 "method": "bdev_nvme_attach_controller" 00:41:01.083 },{ 00:41:01.083 "params": { 00:41:01.083 "name": "Nvme1", 00:41:01.083 "trtype": "tcp", 00:41:01.083 "traddr": "10.0.0.2", 00:41:01.083 "adrfam": "ipv4", 00:41:01.083 "trsvcid": "4420", 00:41:01.083 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:01.083 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:01.083 "hdgst": false, 00:41:01.083 "ddgst": false 00:41:01.083 }, 00:41:01.083 "method": "bdev_nvme_attach_controller" 00:41:01.083 }' 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # asan_lib= 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:01.083 07:26:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:01.084 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:01.084 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:01.084 fio-3.35 00:41:01.084 Starting 2 threads 00:41:11.051 00:41:11.051 filename0: (groupid=0, jobs=1): err= 0: pid=466617: Mon Nov 18 07:26:31 2024 00:41:11.051 read: IOPS=97, BW=391KiB/s (400kB/s)(3920KiB/10036msec) 00:41:11.051 slat (nsec): min=4303, max=32602, avg=9856.57, stdev=2929.80 00:41:11.051 clat (usec): min=857, max=47963, avg=40932.45, stdev=2621.26 00:41:11.051 lat (usec): min=869, max=47978, avg=40942.31, stdev=2621.11 00:41:11.051 clat percentiles (usec): 00:41:11.051 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:11.051 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:11.051 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:41:11.051 | 99.00th=[42206], 99.50th=[42730], 99.90th=[47973], 99.95th=[47973], 00:41:11.051 | 99.99th=[47973] 00:41:11.051 bw ( KiB/s): min= 384, max= 416, per=32.66%, avg=390.40, stdev=13.13, samples=20 00:41:11.051 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:41:11.051 lat (usec) : 1000=0.41% 00:41:11.051 lat (msec) : 50=99.59% 00:41:11.051 cpu : usr=94.97%, sys=4.74%, ctx=18, majf=0, minf=127 00:41:11.051 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:11.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.051 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:11.051 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:11.051 filename1: (groupid=0, jobs=1): err= 0: pid=466618: Mon Nov 18 07:26:31 2024 00:41:11.051 read: IOPS=200, BW=804KiB/s (823kB/s)(8064KiB/10031msec) 00:41:11.051 slat (nsec): min=4367, max=27245, avg=9684.82, stdev=2650.94 00:41:11.051 clat (usec): min=529, max=47909, avg=19872.94, stdev=20298.66 00:41:11.051 lat (usec): min=537, max=47923, avg=19882.63, stdev=20298.45 00:41:11.051 clat percentiles (usec): 00:41:11.051 | 1.00th=[ 553], 5.00th=[ 578], 10.00th=[ 603], 20.00th=[ 635], 00:41:11.051 | 30.00th=[ 685], 40.00th=[ 717], 50.00th=[ 889], 60.00th=[41157], 00:41:11.051 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:11.051 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47973], 99.95th=[47973], 00:41:11.051 | 99.99th=[47973] 00:41:11.051 bw ( KiB/s): min= 704, max= 1024, per=67.33%, avg=804.80, stdev=66.70, samples=20 00:41:11.051 iops : min= 176, max= 256, avg=201.20, stdev=16.68, samples=20 00:41:11.051 lat (usec) : 750=44.94%, 1000=7.14% 00:41:11.051 lat (msec) : 2=0.69%, 50=47.22% 00:41:11.051 cpu : usr=95.02%, sys=4.70%, ctx=14, majf=0, minf=150 00:41:11.051 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:11.051 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.051 issued rwts: total=2016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:11.051 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:11.051 00:41:11.051 Run status group 0 (all jobs): 00:41:11.051 READ: bw=1194KiB/s (1223kB/s), 391KiB/s-804KiB/s (400kB/s-823kB/s), io=11.7MiB (12.3MB), run=10031-10036msec 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.051 00:41:11.051 real 0m11.306s 00:41:11.051 user 0m20.332s 00:41:11.051 sys 0m1.240s 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:11.051 07:26:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:11.051 ************************************ 00:41:11.051 END TEST fio_dif_1_multi_subsystems 00:41:11.051 ************************************ 00:41:11.051 07:26:31 
nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:41:11.051 07:26:31 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:11.051 07:26:31 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:11.051 07:26:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:11.051 ************************************ 00:41:11.051 START TEST fio_dif_rand_params 00:41:11.051 ************************************ 00:41:11.051 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:41:11.051 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:41:11.051 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:41:11.051 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:41:11.051 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:41:11.051 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:41:11.051 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:41:11.051 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:41:11.051 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:41:11.051 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:11.051 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:11.051 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:11.051 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:11.051 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:11.051 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.051 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:11.051 bdev_null0 00:41:11.051 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.051 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:11.051 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.051 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:11.052 [2024-11-18 07:26:31.935587] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:11.052 { 00:41:11.052 "params": { 00:41:11.052 "name": "Nvme$subsystem", 00:41:11.052 "trtype": "$TEST_TRANSPORT", 00:41:11.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:11.052 "adrfam": "ipv4", 00:41:11.052 "trsvcid": "$NVMF_PORT", 00:41:11.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:11.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:11.052 "hdgst": ${hdgst:-false}, 00:41:11.052 "ddgst": ${ddgst:-false} 00:41:11.052 }, 00:41:11.052 "method": "bdev_nvme_attach_controller" 00:41:11.052 } 00:41:11.052 EOF 00:41:11.052 )") 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params 
-- nvmf/common.sh@584 -- # jq . 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:11.052 "params": { 00:41:11.052 "name": "Nvme0", 00:41:11.052 "trtype": "tcp", 00:41:11.052 "traddr": "10.0.0.2", 00:41:11.052 "adrfam": "ipv4", 00:41:11.052 "trsvcid": "4420", 00:41:11.052 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:11.052 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:11.052 "hdgst": false, 00:41:11.052 "ddgst": false 00:41:11.052 }, 00:41:11.052 "method": "bdev_nvme_attach_controller" 00:41:11.052 }' 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:11.052 07:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:11.311 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:11.311 ... 
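
This first fio_dif_rand_params pass sets NULL_DIF=3, bs=128k, numjobs=3, iodepth=3 and runtime=5, and the fio banner around this point confirms a 128KiB randread at queue depth 3 across three threads. The real job file is produced on the fly by gen_fio_conf; a job file consistent with those traced parameters might look like the sketch below, where the [filename0] section name follows fio's own output and the Nvme0n1 bdev name is an assumption derived from the Nvme0 controller attached in the JSON config above.

# hypothetical fio job file for the 128k / randread / 3-job / iodepth-3 pass
cat > rand_params.fio <<'FIO'
[global]
; SPDK fio plugins are normally run with thread=1; the spdk_bdev ioengine is
; the one registered by the preloaded plugin
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
FIO
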
00:41:11.311 fio-3.35 00:41:11.311 Starting 3 threads 00:41:17.868 00:41:17.868 filename0: (groupid=0, jobs=1): err= 0: pid=468011: Mon Nov 18 07:26:37 2024 00:41:17.869 read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(143MiB/5046msec) 00:41:17.869 slat (nsec): min=3969, max=55389, avg=16552.49, stdev=5358.70 00:41:17.869 clat (usec): min=5145, max=53309, avg=13172.19, stdev=6097.41 00:41:17.869 lat (usec): min=5158, max=53321, avg=13188.74, stdev=6097.32 00:41:17.869 clat percentiles (usec): 00:41:17.869 | 1.00th=[ 7832], 5.00th=[10028], 10.00th=[10552], 20.00th=[11076], 00:41:17.869 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12125], 60.00th=[12518], 00:41:17.869 | 70.00th=[13042], 80.00th=[13566], 90.00th=[14615], 95.00th=[15664], 00:41:17.869 | 99.00th=[51119], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:41:17.869 | 99.99th=[53216] 00:41:17.869 bw ( KiB/s): min=16896, max=32768, per=34.35%, avg=29235.20, stdev=4915.08, samples=10 00:41:17.869 iops : min= 132, max= 256, avg=228.40, stdev=38.40, samples=10 00:41:17.869 lat (msec) : 10=5.16%, 20=92.31%, 50=1.31%, 100=1.22% 00:41:17.869 cpu : usr=92.77%, sys=5.73%, ctx=69, majf=0, minf=105 00:41:17.869 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:17.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.869 issued rwts: total=1144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.869 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:17.869 filename0: (groupid=0, jobs=1): err= 0: pid=468012: Mon Nov 18 07:26:37 2024 00:41:17.869 read: IOPS=227, BW=28.5MiB/s (29.9MB/s)(144MiB/5045msec) 00:41:17.869 slat (nsec): min=4186, max=54897, avg=19714.05, stdev=5848.03 00:41:17.869 clat (usec): min=5060, max=54934, avg=13097.28, stdev=5770.25 00:41:17.869 lat (usec): min=5073, max=54954, avg=13116.99, stdev=5769.56 00:41:17.869 clat percentiles (usec): 00:41:17.869 | 1.00th=[ 7767], 5.00th=[ 9372], 10.00th=[10683], 20.00th=[11338], 00:41:17.869 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12387], 60.00th=[12649], 00:41:17.869 | 70.00th=[13042], 80.00th=[13435], 90.00th=[14222], 95.00th=[15008], 00:41:17.869 | 99.00th=[50070], 99.50th=[53216], 99.90th=[54264], 99.95th=[54789], 00:41:17.869 | 99.99th=[54789] 00:41:17.869 bw ( KiB/s): min=21504, max=31744, per=34.53%, avg=29388.80, stdev=2928.31, samples=10 00:41:17.869 iops : min= 168, max= 248, avg=229.60, stdev=22.88, samples=10 00:41:17.869 lat (msec) : 10=6.09%, 20=91.65%, 50=1.30%, 100=0.96% 00:41:17.869 cpu : usr=95.20%, sys=4.30%, ctx=6, majf=0, minf=91 00:41:17.869 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:17.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.869 issued rwts: total=1150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.869 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:17.869 filename0: (groupid=0, jobs=1): err= 0: pid=468013: Mon Nov 18 07:26:37 2024 00:41:17.869 read: IOPS=212, BW=26.5MiB/s (27.8MB/s)(133MiB/5003msec) 00:41:17.869 slat (nsec): min=7748, max=59618, avg=16104.01, stdev=4540.42 00:41:17.869 clat (usec): min=3685, max=53189, avg=14123.08, stdev=4553.77 00:41:17.869 lat (usec): min=3698, max=53219, avg=14139.19, stdev=4553.97 00:41:17.869 clat percentiles (usec): 00:41:17.869 | 1.00th=[ 5014], 5.00th=[ 8848], 10.00th=[10552], 
20.00th=[12125], 00:41:17.869 | 30.00th=[12780], 40.00th=[13435], 50.00th=[14222], 60.00th=[14746], 00:41:17.869 | 70.00th=[15401], 80.00th=[15926], 90.00th=[16712], 95.00th=[17171], 00:41:17.869 | 99.00th=[45876], 99.50th=[48497], 99.90th=[53216], 99.95th=[53216], 00:41:17.869 | 99.99th=[53216] 00:41:17.869 bw ( KiB/s): min=24320, max=33024, per=31.85%, avg=27110.40, stdev=2325.86, samples=10 00:41:17.869 iops : min= 190, max= 258, avg=211.80, stdev=18.17, samples=10 00:41:17.869 lat (msec) : 4=0.09%, 10=9.14%, 20=89.63%, 50=0.85%, 100=0.28% 00:41:17.869 cpu : usr=94.82%, sys=4.66%, ctx=7, majf=0, minf=146 00:41:17.869 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:17.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.869 issued rwts: total=1061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.869 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:17.869 00:41:17.869 Run status group 0 (all jobs): 00:41:17.869 READ: bw=83.1MiB/s (87.1MB/s), 26.5MiB/s-28.5MiB/s (27.8MB/s-29.9MB/s), io=419MiB (440MB), run=5003-5046msec 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.869 bdev_null0 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.869 [2024-11-18 07:26:38.105832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.869 bdev_null1 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.869 07:26:38 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:17.869 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.870 bdev_null2 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
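
The rpc_cmd calls traced above stand up three identical DIF-type-2 targets (bdev_null0-2 behind cnode0-2, all listening on 10.0.0.2:4420). rpc_cmd is the harness's wrapper around SPDK's scripts/rpc.py, so the same sequence issued by hand would look roughly like the following for cnode0; the rpc.py path assumes the checkout layout of this run, and the TCP transport is assumed to have been created earlier in the job.

# per-subsystem target setup as traced above (cnode0 shown; cnode1/cnode2
# differ only in the 0/1/2 suffixes)
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
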
00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:17.870 { 00:41:17.870 "params": { 00:41:17.870 "name": "Nvme$subsystem", 00:41:17.870 "trtype": "$TEST_TRANSPORT", 00:41:17.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:17.870 "adrfam": "ipv4", 00:41:17.870 "trsvcid": "$NVMF_PORT", 00:41:17.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:17.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:17.870 "hdgst": ${hdgst:-false}, 00:41:17.870 "ddgst": ${ddgst:-false} 00:41:17.870 }, 00:41:17.870 "method": "bdev_nvme_attach_controller" 00:41:17.870 } 00:41:17.870 EOF 00:41:17.870 )") 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:17.870 { 00:41:17.870 "params": { 00:41:17.870 "name": "Nvme$subsystem", 00:41:17.870 "trtype": "$TEST_TRANSPORT", 00:41:17.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:17.870 "adrfam": "ipv4", 00:41:17.870 "trsvcid": "$NVMF_PORT", 00:41:17.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:17.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:17.870 "hdgst": ${hdgst:-false}, 00:41:17.870 "ddgst": ${ddgst:-false} 00:41:17.870 }, 00:41:17.870 "method": "bdev_nvme_attach_controller" 00:41:17.870 } 00:41:17.870 EOF 00:41:17.870 )") 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
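
gen_nvmf_target_json, whose trace surrounds this point, renders one bdev_nvme_attach_controller entry per requested subsystem with a here-doc, appends each rendered entry to a bash array, and finally joins the array with IFS=',' before the result is pretty-printed by jq and handed to fio. A condensed sketch of that accumulate-and-join pattern follows; the literal values are the ones this run resolves $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT to, and the outer JSON document that wraps the joined list is left out because the trace only shows the list itself.

# condensed sketch of the gen_nvmf_target_json pattern traced here, for the
# three subsystems used by fio_dif_rand_params
config=()
for subsystem in 0 1 2; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
# join the rendered entries with commas, like the trace's final printf; the
# harness embeds this list in the JSON document it pipes to fio
(IFS=,; printf '%s\n' "${config[*]}")
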
00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:17.870 { 00:41:17.870 "params": { 00:41:17.870 "name": "Nvme$subsystem", 00:41:17.870 "trtype": "$TEST_TRANSPORT", 00:41:17.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:17.870 "adrfam": "ipv4", 00:41:17.870 "trsvcid": "$NVMF_PORT", 00:41:17.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:17.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:17.870 "hdgst": ${hdgst:-false}, 00:41:17.870 "ddgst": ${ddgst:-false} 00:41:17.870 }, 00:41:17.870 "method": "bdev_nvme_attach_controller" 00:41:17.870 } 00:41:17.870 EOF 00:41:17.870 )") 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:17.870 "params": { 00:41:17.870 "name": "Nvme0", 00:41:17.870 "trtype": "tcp", 00:41:17.870 "traddr": "10.0.0.2", 00:41:17.870 "adrfam": "ipv4", 00:41:17.870 "trsvcid": "4420", 00:41:17.870 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:17.870 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:17.870 "hdgst": false, 00:41:17.870 "ddgst": false 00:41:17.870 }, 00:41:17.870 "method": "bdev_nvme_attach_controller" 00:41:17.870 },{ 00:41:17.870 "params": { 00:41:17.870 "name": "Nvme1", 00:41:17.870 "trtype": "tcp", 00:41:17.870 "traddr": "10.0.0.2", 00:41:17.870 "adrfam": "ipv4", 00:41:17.870 "trsvcid": "4420", 00:41:17.870 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:17.870 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:17.870 "hdgst": false, 00:41:17.870 "ddgst": false 00:41:17.870 }, 00:41:17.870 "method": "bdev_nvme_attach_controller" 00:41:17.870 },{ 00:41:17.870 "params": { 00:41:17.870 "name": "Nvme2", 00:41:17.870 "trtype": "tcp", 00:41:17.870 "traddr": "10.0.0.2", 00:41:17.870 "adrfam": "ipv4", 00:41:17.870 "trsvcid": "4420", 00:41:17.870 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:17.870 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:17.870 "hdgst": false, 00:41:17.870 "ddgst": false 00:41:17.870 }, 00:41:17.870 "method": "bdev_nvme_attach_controller" 00:41:17.870 }' 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # 
asan_lib= 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:17.870 07:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:17.870 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:17.870 ... 00:41:17.870 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:17.870 ... 00:41:17.870 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:17.870 ... 00:41:17.870 fio-3.35 00:41:17.870 Starting 24 threads 00:41:30.238 00:41:30.238 filename0: (groupid=0, jobs=1): err= 0: pid=468876: Mon Nov 18 07:26:49 2024 00:41:30.238 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10006msec) 00:41:30.238 slat (usec): min=8, max=117, avg=45.17, stdev=16.78 00:41:30.238 clat (usec): min=19883, max=58562, avg=33503.88, stdev=1876.44 00:41:30.238 lat (usec): min=19910, max=58656, avg=33549.04, stdev=1874.87 00:41:30.238 clat percentiles (usec): 00:41:30.238 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:41:30.238 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:41:30.238 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:41:30.238 | 99.00th=[34866], 99.50th=[35914], 99.90th=[58459], 99.95th=[58459], 00:41:30.238 | 99.99th=[58459] 00:41:30.238 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1879.74, stdev=74.07, samples=19 00:41:30.238 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:41:30.238 lat (msec) : 20=0.08%, 50=99.58%, 100=0.34% 00:41:30.238 cpu : usr=96.75%, sys=2.19%, ctx=122, majf=0, minf=9 00:41:30.238 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:30.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.238 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.238 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:30.238 filename0: (groupid=0, jobs=1): err= 0: pid=468877: Mon Nov 18 07:26:49 2024 00:41:30.238 read: IOPS=472, BW=1892KiB/s (1937kB/s)(18.5MiB/10013msec) 00:41:30.238 slat (usec): min=8, max=114, avg=44.73, stdev=24.42 00:41:30.238 clat (usec): min=18157, max=46064, avg=33426.14, stdev=2191.52 00:41:30.238 lat (usec): min=18222, max=46127, avg=33470.87, stdev=2191.38 00:41:30.238 clat percentiles (usec): 00:41:30.238 | 1.00th=[21890], 5.00th=[32375], 10.00th=[32637], 20.00th=[33162], 00:41:30.238 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:41:30.238 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:41:30.238 | 99.00th=[43254], 99.50th=[44303], 99.90th=[45876], 99.95th=[45876], 00:41:30.238 | 99.99th=[45876] 00:41:30.238 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1893.60, stdev=52.24, samples=20 00:41:30.238 iops : min= 448, max= 480, avg=473.40, stdev=13.06, samples=20 00:41:30.238 lat (msec) : 20=0.34%, 50=99.66% 00:41:30.238 cpu : usr=98.14%, sys=1.42%, ctx=15, majf=0, minf=10 00:41:30.238 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 
8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:41:30.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.238 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.238 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:30.238 filename0: (groupid=0, jobs=1): err= 0: pid=468878: Mon Nov 18 07:26:49 2024 00:41:30.239 read: IOPS=472, BW=1890KiB/s (1936kB/s)(18.5MiB/10022msec) 00:41:30.239 slat (usec): min=8, max=124, avg=31.24, stdev=15.98 00:41:30.239 clat (usec): min=18024, max=45859, avg=33577.11, stdev=1161.34 00:41:30.239 lat (usec): min=18050, max=45875, avg=33608.35, stdev=1163.67 00:41:30.239 clat percentiles (usec): 00:41:30.239 | 1.00th=[32113], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:41:30.239 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:41:30.239 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:41:30.239 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35914], 99.95th=[42730], 00:41:30.239 | 99.99th=[45876] 00:41:30.239 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1886.32, stdev=56.16, samples=19 00:41:30.239 iops : min= 448, max= 480, avg=471.58, stdev=14.04, samples=19 00:41:30.239 lat (msec) : 20=0.34%, 50=99.66% 00:41:30.239 cpu : usr=98.16%, sys=1.33%, ctx=62, majf=0, minf=9 00:41:30.239 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:30.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.239 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.239 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:30.239 filename0: (groupid=0, jobs=1): err= 0: pid=468879: Mon Nov 18 07:26:49 2024 00:41:30.239 read: IOPS=473, BW=1894KiB/s (1939kB/s)(18.5MiB/10003msec) 00:41:30.239 slat (nsec): min=11336, max=84326, avg=34996.69, stdev=11226.94 00:41:30.239 clat (usec): min=17708, max=35949, avg=33467.70, stdev=1414.61 00:41:30.239 lat (usec): min=17759, max=36006, avg=33502.69, stdev=1414.50 00:41:30.239 clat percentiles (usec): 00:41:30.239 | 1.00th=[24511], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:41:30.239 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:41:30.239 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:41:30.239 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:41:30.239 | 99.99th=[35914] 00:41:30.239 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1893.05, stdev=53.61, samples=19 00:41:30.239 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:41:30.239 lat (msec) : 20=0.34%, 50=99.66% 00:41:30.239 cpu : usr=97.90%, sys=1.40%, ctx=78, majf=0, minf=9 00:41:30.239 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:30.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.239 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.239 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:30.239 filename0: (groupid=0, jobs=1): err= 0: pid=468880: Mon Nov 18 07:26:49 2024 00:41:30.239 read: IOPS=472, BW=1891KiB/s (1937kB/s)(18.5MiB/10017msec) 00:41:30.239 slat (nsec): min=10233, max=87865, avg=34660.54, stdev=10218.61 00:41:30.239 clat 
(usec): min=19903, max=37570, avg=33539.70, stdev=1179.44 00:41:30.239 lat (usec): min=19940, max=37599, avg=33574.36, stdev=1178.77 00:41:30.239 clat percentiles (usec): 00:41:30.239 | 1.00th=[31065], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:41:30.239 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:41:30.239 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:41:30.239 | 99.00th=[35390], 99.50th=[35914], 99.90th=[37487], 99.95th=[37487], 00:41:30.239 | 99.99th=[37487] 00:41:30.239 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1886.32, stdev=57.91, samples=19 00:41:30.239 iops : min= 448, max= 480, avg=471.58, stdev=14.48, samples=19 00:41:30.239 lat (msec) : 20=0.06%, 50=99.94% 00:41:30.239 cpu : usr=97.32%, sys=1.80%, ctx=113, majf=0, minf=9 00:41:30.239 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:30.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.239 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.239 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:30.239 filename0: (groupid=0, jobs=1): err= 0: pid=468881: Mon Nov 18 07:26:49 2024 00:41:30.239 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10005msec) 00:41:30.239 slat (usec): min=8, max=109, avg=34.38, stdev=11.84 00:41:30.239 clat (usec): min=8576, max=61883, avg=33588.47, stdev=2159.61 00:41:30.239 lat (usec): min=8612, max=61921, avg=33622.85, stdev=2160.39 00:41:30.239 clat percentiles (usec): 00:41:30.239 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:41:30.239 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:41:30.239 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:41:30.239 | 99.00th=[35390], 99.50th=[35390], 99.90th=[61604], 99.95th=[61604], 00:41:30.239 | 99.99th=[62129] 00:41:30.239 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1879.58, stdev=74.55, samples=19 00:41:30.239 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:41:30.239 lat (msec) : 10=0.04%, 20=0.59%, 50=99.03%, 100=0.34% 00:41:30.239 cpu : usr=98.07%, sys=1.39%, ctx=47, majf=0, minf=9 00:41:30.239 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:30.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.239 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.239 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:30.239 filename0: (groupid=0, jobs=1): err= 0: pid=468882: Mon Nov 18 07:26:49 2024 00:41:30.239 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10005msec) 00:41:30.239 slat (nsec): min=7318, max=69033, avg=31930.55, stdev=10958.48 00:41:30.239 clat (usec): min=18492, max=48105, avg=33630.15, stdev=855.98 00:41:30.239 lat (usec): min=18553, max=48127, avg=33662.08, stdev=854.60 00:41:30.239 clat percentiles (usec): 00:41:30.239 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:41:30.239 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:41:30.239 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:41:30.239 | 99.00th=[35390], 99.50th=[35390], 99.90th=[37487], 99.95th=[45351], 00:41:30.239 | 99.99th=[47973] 00:41:30.239 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, 
avg=1886.32, stdev=57.91, samples=19 00:41:30.239 iops : min= 448, max= 480, avg=471.58, stdev=14.48, samples=19 00:41:30.239 lat (msec) : 20=0.04%, 50=99.96% 00:41:30.239 cpu : usr=98.37%, sys=1.15%, ctx=38, majf=0, minf=9 00:41:30.239 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:30.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.239 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.239 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:30.239 filename0: (groupid=0, jobs=1): err= 0: pid=468883: Mon Nov 18 07:26:49 2024 00:41:30.239 read: IOPS=473, BW=1894KiB/s (1940kB/s)(18.5MiB/10001msec) 00:41:30.239 slat (nsec): min=10903, max=75626, avg=35039.16, stdev=10572.57 00:41:30.239 clat (usec): min=18490, max=35955, avg=33482.58, stdev=1441.08 00:41:30.239 lat (usec): min=18525, max=35986, avg=33517.62, stdev=1440.71 00:41:30.239 clat percentiles (usec): 00:41:30.239 | 1.00th=[22152], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:41:30.239 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:41:30.239 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:41:30.239 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:41:30.239 | 99.99th=[35914] 00:41:30.239 bw ( KiB/s): min= 1792, max= 1923, per=4.18%, avg=1893.21, stdev=53.70, samples=19 00:41:30.239 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:41:30.239 lat (msec) : 20=0.34%, 50=99.66% 00:41:30.239 cpu : usr=97.36%, sys=1.84%, ctx=104, majf=0, minf=9 00:41:30.239 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:30.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.239 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.239 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:30.239 filename1: (groupid=0, jobs=1): err= 0: pid=468884: Mon Nov 18 07:26:49 2024 00:41:30.239 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10007msec) 00:41:30.239 slat (nsec): min=5393, max=89783, avg=37979.40, stdev=14099.63 00:41:30.239 clat (usec): min=19797, max=67632, avg=33572.67, stdev=1952.73 00:41:30.239 lat (usec): min=19814, max=67647, avg=33610.65, stdev=1950.71 00:41:30.239 clat percentiles (usec): 00:41:30.239 | 1.00th=[32637], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:41:30.239 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:41:30.239 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:41:30.239 | 99.00th=[35390], 99.50th=[35914], 99.90th=[58983], 99.95th=[58983], 00:41:30.239 | 99.99th=[67634] 00:41:30.239 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1879.58, stdev=74.55, samples=19 00:41:30.239 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:41:30.239 lat (msec) : 20=0.15%, 50=99.51%, 100=0.34% 00:41:30.239 cpu : usr=98.08%, sys=1.46%, ctx=13, majf=0, minf=9 00:41:30.239 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:30.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.239 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.239 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.239 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:41:30.239 filename1: (groupid=0, jobs=1): err= 0: pid=468885: Mon Nov 18 07:26:49 2024 00:41:30.239 read: IOPS=471, BW=1887KiB/s (1933kB/s)(18.4MiB/10004msec) 00:41:30.239 slat (nsec): min=8117, max=93268, avg=35878.56, stdev=12941.39 00:41:30.239 clat (usec): min=4430, max=73746, avg=33583.51, stdev=2954.45 00:41:30.239 lat (usec): min=4452, max=73780, avg=33619.39, stdev=2955.58 00:41:30.239 clat percentiles (usec): 00:41:30.239 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:41:30.239 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:41:30.239 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:41:30.239 | 99.00th=[35390], 99.50th=[35390], 99.90th=[73925], 99.95th=[73925], 00:41:30.239 | 99.99th=[73925] 00:41:30.240 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1879.74, stdev=74.07, samples=19 00:41:30.240 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:41:30.240 lat (msec) : 10=0.34%, 20=0.30%, 50=99.03%, 100=0.34% 00:41:30.240 cpu : usr=97.03%, sys=1.83%, ctx=128, majf=0, minf=9 00:41:30.240 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:30.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.240 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.240 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.240 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:30.240 filename1: (groupid=0, jobs=1): err= 0: pid=468886: Mon Nov 18 07:26:49 2024 00:41:30.240 read: IOPS=472, BW=1892KiB/s (1937kB/s)(18.5MiB/10015msec) 00:41:30.240 slat (nsec): min=8419, max=76903, avg=33124.72, stdev=11095.58 00:41:30.240 clat (usec): min=17955, max=46016, avg=33540.60, stdev=1467.34 00:41:30.240 lat (usec): min=17969, max=46064, avg=33573.73, stdev=1467.49 00:41:30.240 clat percentiles (usec): 00:41:30.240 | 1.00th=[28967], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:41:30.240 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:41:30.240 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:41:30.240 | 99.00th=[35390], 99.50th=[35390], 99.90th=[45351], 99.95th=[45351], 00:41:30.240 | 99.99th=[45876] 00:41:30.240 bw ( KiB/s): min= 1792, max= 1936, per=4.16%, avg=1886.32, stdev=58.15, samples=19 00:41:30.240 iops : min= 448, max= 484, avg=471.58, stdev=14.54, samples=19 00:41:30.240 lat (msec) : 20=0.34%, 50=99.66% 00:41:30.240 cpu : usr=96.53%, sys=2.02%, ctx=210, majf=0, minf=9 00:41:30.240 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:30.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.240 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.240 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.240 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:30.240 filename1: (groupid=0, jobs=1): err= 0: pid=468887: Mon Nov 18 07:26:49 2024 00:41:30.240 read: IOPS=471, BW=1886KiB/s (1932kB/s)(18.4MiB/10008msec) 00:41:30.240 slat (usec): min=8, max=103, avg=42.44, stdev=22.41 00:41:30.240 clat (usec): min=28318, max=40774, avg=33560.73, stdev=782.03 00:41:30.240 lat (usec): min=28369, max=40804, avg=33603.16, stdev=776.98 00:41:30.240 clat percentiles (usec): 00:41:30.240 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:41:30.240 | 30.00th=[33162], 40.00th=[33424], 
50.00th=[33424], 60.00th=[33817], 00:41:30.240 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:41:30.240 | 99.00th=[35390], 99.50th=[35390], 99.90th=[40633], 99.95th=[40633], 00:41:30.240 | 99.99th=[40633] 00:41:30.240 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1886.32, stdev=57.91, samples=19 00:41:30.240 iops : min= 448, max= 480, avg=471.58, stdev=14.48, samples=19 00:41:30.240 lat (msec) : 50=100.00% 00:41:30.240 cpu : usr=98.44%, sys=1.11%, ctx=53, majf=0, minf=9 00:41:30.240 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:30.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.240 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.240 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.240 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:30.240 filename1: (groupid=0, jobs=1): err= 0: pid=468888: Mon Nov 18 07:26:49 2024 00:41:30.240 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10006msec) 00:41:30.240 slat (nsec): min=5925, max=84351, avg=36932.08, stdev=14086.86 00:41:30.240 clat (usec): min=19874, max=57871, avg=33595.44, stdev=1807.09 00:41:30.240 lat (usec): min=19898, max=57888, avg=33632.37, stdev=1806.82 00:41:30.240 clat percentiles (usec): 00:41:30.240 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:41:30.240 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:41:30.240 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:41:30.240 | 99.00th=[34866], 99.50th=[35914], 99.90th=[57934], 99.95th=[57934], 00:41:30.240 | 99.99th=[57934] 00:41:30.240 bw ( KiB/s): min= 1660, max= 1920, per=4.15%, avg=1879.37, stdev=75.19, samples=19 00:41:30.240 iops : min= 415, max= 480, avg=469.84, stdev=18.80, samples=19 00:41:30.240 lat (msec) : 20=0.08%, 50=99.58%, 100=0.34% 00:41:30.240 cpu : usr=98.43%, sys=1.13%, ctx=19, majf=0, minf=9 00:41:30.240 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:30.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.240 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.240 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.240 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:30.240 filename1: (groupid=0, jobs=1): err= 0: pid=468889: Mon Nov 18 07:26:49 2024 00:41:30.240 read: IOPS=473, BW=1894KiB/s (1939kB/s)(18.5MiB/10002msec) 00:41:30.240 slat (nsec): min=8182, max=89934, avg=37662.80, stdev=14105.01 00:41:30.240 clat (usec): min=18378, max=36013, avg=33456.57, stdev=1400.58 00:41:30.240 lat (usec): min=18418, max=36038, avg=33494.23, stdev=1401.27 00:41:30.240 clat percentiles (usec): 00:41:30.240 | 1.00th=[23200], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:41:30.240 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:41:30.240 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:41:30.240 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:41:30.240 | 99.99th=[35914] 00:41:30.240 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1893.05, stdev=53.61, samples=19 00:41:30.240 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:41:30.240 lat (msec) : 20=0.36%, 50=99.64% 00:41:30.240 cpu : usr=98.42%, sys=1.16%, ctx=13, majf=0, minf=9 00:41:30.240 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 
00:41:30.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.240 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.240 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.240 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:30.240 filename1: (groupid=0, jobs=1): err= 0: pid=468890: Mon Nov 18 07:26:49 2024 00:41:30.240 read: IOPS=473, BW=1894KiB/s (1939kB/s)(18.5MiB/10002msec) 00:41:30.240 slat (nsec): min=8170, max=82344, avg=27103.31, stdev=14292.00 00:41:30.240 clat (usec): min=18445, max=36052, avg=33572.39, stdev=1411.88 00:41:30.240 lat (usec): min=18480, max=36071, avg=33599.49, stdev=1411.14 00:41:30.240 clat percentiles (usec): 00:41:30.240 | 1.00th=[22938], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:41:30.240 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:41:30.240 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:41:30.240 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:41:30.240 | 99.99th=[35914] 00:41:30.240 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1893.05, stdev=53.61, samples=19 00:41:30.240 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:41:30.240 lat (msec) : 20=0.34%, 50=99.66% 00:41:30.240 cpu : usr=98.38%, sys=1.20%, ctx=14, majf=0, minf=9 00:41:30.240 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:30.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.240 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.240 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.240 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:30.240 filename1: (groupid=0, jobs=1): err= 0: pid=468891: Mon Nov 18 07:26:49 2024 00:41:30.240 read: IOPS=474, BW=1898KiB/s (1944kB/s)(18.6MiB/10013msec) 00:41:30.240 slat (nsec): min=8135, max=68178, avg=17754.00, stdev=10968.62 00:41:30.240 clat (usec): min=12499, max=46640, avg=33574.44, stdev=2181.27 00:41:30.240 lat (usec): min=12526, max=46650, avg=33592.19, stdev=2180.23 00:41:30.240 clat percentiles (usec): 00:41:30.240 | 1.00th=[21890], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:41:30.240 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:41:30.240 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:41:30.240 | 99.00th=[41157], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:41:30.240 | 99.99th=[46400] 00:41:30.240 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1894.40, stdev=62.60, samples=20 00:41:30.240 iops : min= 448, max= 512, avg=473.60, stdev=15.65, samples=20 00:41:30.240 lat (msec) : 20=0.53%, 50=99.47% 00:41:30.240 cpu : usr=98.27%, sys=1.31%, ctx=11, majf=0, minf=9 00:41:30.240 IO depths : 1=1.6%, 2=7.8%, 4=24.9%, 8=54.8%, 16=10.9%, 32=0.0%, >=64=0.0% 00:41:30.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.240 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.240 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.240 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:30.240 filename2: (groupid=0, jobs=1): err= 0: pid=468892: Mon Nov 18 07:26:49 2024 00:41:30.240 read: IOPS=473, BW=1894KiB/s (1939kB/s)(18.5MiB/10002msec) 00:41:30.240 slat (nsec): min=8078, max=78920, avg=22464.66, stdev=12578.05 00:41:30.240 clat (usec): min=18673, max=45984, 
avg=33598.14, stdev=1563.97 00:41:30.240 lat (usec): min=18693, max=46009, avg=33620.60, stdev=1562.89 00:41:30.240 clat percentiles (usec): 00:41:30.240 | 1.00th=[23200], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:41:30.240 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:41:30.240 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:41:30.240 | 99.00th=[35390], 99.50th=[35390], 99.90th=[44827], 99.95th=[45351], 00:41:30.240 | 99.99th=[45876] 00:41:30.240 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1893.05, stdev=53.61, samples=19 00:41:30.240 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:41:30.240 lat (msec) : 20=0.34%, 50=99.66% 00:41:30.240 cpu : usr=98.26%, sys=1.32%, ctx=15, majf=0, minf=9 00:41:30.240 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:30.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.240 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.240 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.241 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:30.241 filename2: (groupid=0, jobs=1): err= 0: pid=468893: Mon Nov 18 07:26:49 2024 00:41:30.241 read: IOPS=471, BW=1887KiB/s (1933kB/s)(18.4MiB/10004msec) 00:41:30.241 slat (nsec): min=9400, max=71600, avg=30648.13, stdev=9518.09 00:41:30.241 clat (usec): min=4472, max=73467, avg=33630.14, stdev=2943.21 00:41:30.241 lat (usec): min=4495, max=73500, avg=33660.79, stdev=2943.62 00:41:30.241 clat percentiles (usec): 00:41:30.241 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:41:30.241 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:41:30.241 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:41:30.241 | 99.00th=[35390], 99.50th=[35390], 99.90th=[72877], 99.95th=[73925], 00:41:30.241 | 99.99th=[73925] 00:41:30.241 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1879.74, stdev=74.07, samples=19 00:41:30.241 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:41:30.241 lat (msec) : 10=0.34%, 20=0.30%, 50=99.03%, 100=0.34% 00:41:30.241 cpu : usr=98.42%, sys=1.15%, ctx=15, majf=0, minf=9 00:41:30.241 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:30.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.241 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.241 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.241 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:30.241 filename2: (groupid=0, jobs=1): err= 0: pid=468894: Mon Nov 18 07:26:49 2024 00:41:30.241 read: IOPS=473, BW=1894KiB/s (1940kB/s)(18.5MiB/10001msec) 00:41:30.241 slat (nsec): min=8531, max=79997, avg=35166.64, stdev=10929.02 00:41:30.241 clat (usec): min=18462, max=35959, avg=33471.70, stdev=1438.81 00:41:30.241 lat (usec): min=18505, max=35998, avg=33506.87, stdev=1438.88 00:41:30.241 clat percentiles (usec): 00:41:30.241 | 1.00th=[22152], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:41:30.241 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:41:30.241 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:41:30.241 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:41:30.241 | 99.99th=[35914] 00:41:30.241 bw ( KiB/s): min= 1792, max= 1923, per=4.18%, avg=1893.21, stdev=53.70, 
samples=19 00:41:30.241 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:41:30.241 lat (msec) : 20=0.34%, 50=99.66% 00:41:30.241 cpu : usr=97.49%, sys=1.71%, ctx=55, majf=0, minf=9 00:41:30.241 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:30.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.241 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.241 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.241 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:30.241 filename2: (groupid=0, jobs=1): err= 0: pid=468895: Mon Nov 18 07:26:49 2024 00:41:30.241 read: IOPS=471, BW=1887KiB/s (1933kB/s)(18.4MiB/10003msec) 00:41:30.241 slat (nsec): min=16386, max=88745, avg=38088.04, stdev=14005.65 00:41:30.241 clat (usec): min=19772, max=54775, avg=33557.76, stdev=1750.00 00:41:30.241 lat (usec): min=19789, max=54813, avg=33595.85, stdev=1748.66 00:41:30.241 clat percentiles (usec): 00:41:30.241 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:41:30.241 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:41:30.241 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:41:30.241 | 99.00th=[35390], 99.50th=[36439], 99.90th=[54789], 99.95th=[54789], 00:41:30.241 | 99.99th=[54789] 00:41:30.241 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1879.58, stdev=74.55, samples=19 00:41:30.241 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:41:30.241 lat (msec) : 20=0.17%, 50=99.49%, 100=0.34% 00:41:30.241 cpu : usr=98.26%, sys=1.28%, ctx=11, majf=0, minf=9 00:41:30.241 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:30.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.241 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.241 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.241 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:30.241 filename2: (groupid=0, jobs=1): err= 0: pid=468896: Mon Nov 18 07:26:49 2024 00:41:30.241 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10005msec) 00:41:30.241 slat (nsec): min=8080, max=78452, avg=31671.03, stdev=13551.80 00:41:30.241 clat (usec): min=14355, max=53341, avg=33667.15, stdev=1719.32 00:41:30.241 lat (usec): min=14363, max=53379, avg=33698.83, stdev=1718.25 00:41:30.241 clat percentiles (usec): 00:41:30.241 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:41:30.241 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:41:30.241 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:41:30.241 | 99.00th=[35914], 99.50th=[43779], 99.90th=[53216], 99.95th=[53216], 00:41:30.241 | 99.99th=[53216] 00:41:30.241 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1879.58, stdev=56.03, samples=19 00:41:30.241 iops : min= 448, max= 480, avg=469.89, stdev=14.01, samples=19 00:41:30.241 lat (msec) : 20=0.08%, 50=99.58%, 100=0.34% 00:41:30.241 cpu : usr=98.19%, sys=1.40%, ctx=13, majf=0, minf=9 00:41:30.241 IO depths : 1=0.6%, 2=6.8%, 4=24.9%, 8=55.8%, 16=11.9%, 32=0.0%, >=64=0.0% 00:41:30.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.241 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.241 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.241 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:41:30.241 filename2: (groupid=0, jobs=1): err= 0: pid=468897: Mon Nov 18 07:26:49 2024 00:41:30.241 read: IOPS=471, BW=1887KiB/s (1933kB/s)(18.4MiB/10004msec) 00:41:30.241 slat (nsec): min=8383, max=74731, avg=32111.09, stdev=12100.75 00:41:30.241 clat (usec): min=20105, max=55739, avg=33635.54, stdev=1718.76 00:41:30.241 lat (usec): min=20135, max=55762, avg=33667.65, stdev=1717.47 00:41:30.241 clat percentiles (usec): 00:41:30.241 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:41:30.241 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:41:30.241 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:41:30.241 | 99.00th=[35390], 99.50th=[35914], 99.90th=[55837], 99.95th=[55837], 00:41:30.241 | 99.99th=[55837] 00:41:30.241 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1879.74, stdev=74.07, samples=19 00:41:30.241 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:41:30.241 lat (msec) : 50=99.66%, 100=0.34% 00:41:30.241 cpu : usr=98.14%, sys=1.38%, ctx=39, majf=0, minf=9 00:41:30.241 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:30.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.241 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.241 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.241 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:30.241 filename2: (groupid=0, jobs=1): err= 0: pid=468898: Mon Nov 18 07:26:49 2024 00:41:30.241 read: IOPS=473, BW=1894KiB/s (1939kB/s)(18.5MiB/10002msec) 00:41:30.241 slat (usec): min=11, max=111, avg=46.50, stdev=17.28 00:41:30.241 clat (usec): min=19100, max=36003, avg=33383.38, stdev=1409.87 00:41:30.241 lat (usec): min=19126, max=36027, avg=33429.88, stdev=1408.89 00:41:30.241 clat percentiles (usec): 00:41:30.241 | 1.00th=[22938], 5.00th=[32375], 10.00th=[32900], 20.00th=[33162], 00:41:30.241 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:41:30.241 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:41:30.241 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:41:30.241 | 99.99th=[35914] 00:41:30.241 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1893.05, stdev=53.61, samples=19 00:41:30.241 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:41:30.241 lat (msec) : 20=0.34%, 50=99.66% 00:41:30.241 cpu : usr=98.18%, sys=1.39%, ctx=14, majf=0, minf=9 00:41:30.241 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:30.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.241 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.241 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.241 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:30.241 filename2: (groupid=0, jobs=1): err= 0: pid=468899: Mon Nov 18 07:26:49 2024 00:41:30.241 read: IOPS=471, BW=1887KiB/s (1933kB/s)(18.4MiB/10004msec) 00:41:30.241 slat (nsec): min=10091, max=74731, avg=34129.25, stdev=10396.56 00:41:30.241 clat (usec): min=19923, max=55805, avg=33600.51, stdev=1746.14 00:41:30.241 lat (usec): min=19958, max=55826, avg=33634.64, stdev=1744.70 00:41:30.241 clat percentiles (usec): 00:41:30.241 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:41:30.241 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 
60.00th=[33817], 00:41:30.241 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:41:30.241 | 99.00th=[35390], 99.50th=[35914], 99.90th=[55837], 99.95th=[55837], 00:41:30.241 | 99.99th=[55837] 00:41:30.241 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1879.74, stdev=74.07, samples=19 00:41:30.241 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:41:30.241 lat (msec) : 20=0.06%, 50=99.60%, 100=0.34% 00:41:30.241 cpu : usr=98.08%, sys=1.49%, ctx=18, majf=0, minf=9 00:41:30.241 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:30.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.241 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.241 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.241 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:30.241 00:41:30.241 Run status group 0 (all jobs): 00:41:30.241 READ: bw=44.2MiB/s (46.4MB/s), 1886KiB/s-1898KiB/s (1932kB/s-1944kB/s), io=443MiB (465MB), run=10001-10022msec 00:41:30.241 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:30.241 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:30.241 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:30.241 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:30.241 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:30.241 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.242 07:26:49 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.242 bdev_null0 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.242 [2024-11-18 07:26:49.774248] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.242 bdev_null1 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:30.242 { 00:41:30.242 "params": { 00:41:30.242 "name": "Nvme$subsystem", 00:41:30.242 "trtype": "$TEST_TRANSPORT", 00:41:30.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:30.242 "adrfam": "ipv4", 00:41:30.242 "trsvcid": "$NVMF_PORT", 00:41:30.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:30.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:30.242 "hdgst": ${hdgst:-false}, 00:41:30.242 "ddgst": ${ddgst:-false} 00:41:30.242 }, 00:41:30.242 "method": "bdev_nvme_attach_controller" 00:41:30.242 } 00:41:30.242 EOF 00:41:30.242 )") 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:30.242 07:26:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:30.242 { 00:41:30.242 "params": { 00:41:30.242 "name": "Nvme$subsystem", 00:41:30.242 "trtype": "$TEST_TRANSPORT", 00:41:30.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:30.242 "adrfam": "ipv4", 00:41:30.242 "trsvcid": "$NVMF_PORT", 00:41:30.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:30.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:30.242 "hdgst": ${hdgst:-false}, 00:41:30.242 "ddgst": ${ddgst:-false} 00:41:30.242 }, 00:41:30.243 "method": "bdev_nvme_attach_controller" 00:41:30.243 } 00:41:30.243 EOF 00:41:30.243 )") 00:41:30.243 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:30.243 07:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file 
<= files )) 00:41:30.243 07:26:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:30.243 07:26:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:41:30.243 07:26:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:30.243 07:26:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:30.243 "params": { 00:41:30.243 "name": "Nvme0", 00:41:30.243 "trtype": "tcp", 00:41:30.243 "traddr": "10.0.0.2", 00:41:30.243 "adrfam": "ipv4", 00:41:30.243 "trsvcid": "4420", 00:41:30.243 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:30.243 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:30.243 "hdgst": false, 00:41:30.243 "ddgst": false 00:41:30.243 }, 00:41:30.243 "method": "bdev_nvme_attach_controller" 00:41:30.243 },{ 00:41:30.243 "params": { 00:41:30.243 "name": "Nvme1", 00:41:30.243 "trtype": "tcp", 00:41:30.243 "traddr": "10.0.0.2", 00:41:30.243 "adrfam": "ipv4", 00:41:30.243 "trsvcid": "4420", 00:41:30.243 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:30.243 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:30.243 "hdgst": false, 00:41:30.243 "ddgst": false 00:41:30.243 }, 00:41:30.243 "method": "bdev_nvme_attach_controller" 00:41:30.243 }' 00:41:30.243 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:30.243 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:30.243 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:30.243 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:30.243 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:30.243 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:30.243 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:30.243 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:30.243 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:30.243 07:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:30.243 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:30.243 ... 00:41:30.243 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:30.243 ... 
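For reference, the create_subsystems trace above boils down to four RPCs per subsystem: create a DIF-capable null bdev, create an NVMe-oF subsystem, attach the bdev as a namespace, and expose a TCP listener. A minimal standalone sketch of the same sequence, assuming a running nvmf_tgt with the tcp transport already created earlier in the run and scripts/rpc.py talking to the default RPC socket (rpc_cmd in the trace is a thin wrapper around it):

# subsystem 0: null bdev with 16-byte metadata and DIF type 1, exported over NVMe/TCP
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# subsystem 1 is identical apart from the index
scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Teardown (destroy_subsystems) reverses this with nvmf_delete_subsystem followed by bdev_null_delete, exactly as traced at the end of the previous 24-thread run.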
00:41:30.243 fio-3.35 00:41:30.243 Starting 4 threads 00:41:35.507 00:41:35.507 filename0: (groupid=0, jobs=1): err= 0: pid=470243: Mon Nov 18 07:26:55 2024 00:41:35.507 read: IOPS=2016, BW=15.8MiB/s (16.5MB/s)(78.8MiB/5002msec) 00:41:35.507 slat (nsec): min=3995, max=65698, avg=15093.30, stdev=6125.69 00:41:35.507 clat (usec): min=756, max=7636, avg=3909.65, stdev=575.68 00:41:35.507 lat (usec): min=770, max=7655, avg=3924.75, stdev=576.01 00:41:35.507 clat percentiles (usec): 00:41:35.507 | 1.00th=[ 2057], 5.00th=[ 3163], 10.00th=[ 3359], 20.00th=[ 3621], 00:41:35.507 | 30.00th=[ 3752], 40.00th=[ 3884], 50.00th=[ 3949], 60.00th=[ 3982], 00:41:35.507 | 70.00th=[ 4047], 80.00th=[ 4113], 90.00th=[ 4293], 95.00th=[ 4752], 00:41:35.507 | 99.00th=[ 6063], 99.50th=[ 6652], 99.90th=[ 7111], 99.95th=[ 7242], 00:41:35.507 | 99.99th=[ 7504] 00:41:35.507 bw ( KiB/s): min=15856, max=16480, per=25.66%, avg=16139.20, stdev=249.36, samples=10 00:41:35.507 iops : min= 1982, max= 2060, avg=2017.40, stdev=31.17, samples=10 00:41:35.507 lat (usec) : 1000=0.12% 00:41:35.507 lat (msec) : 2=0.87%, 4=60.39%, 10=38.62% 00:41:35.507 cpu : usr=87.92%, sys=8.30%, ctx=466, majf=0, minf=0 00:41:35.507 IO depths : 1=0.4%, 2=18.4%, 4=54.3%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:35.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.507 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.507 issued rwts: total=10088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:35.507 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:35.507 filename0: (groupid=0, jobs=1): err= 0: pid=470244: Mon Nov 18 07:26:55 2024 00:41:35.507 read: IOPS=1951, BW=15.2MiB/s (16.0MB/s)(76.2MiB/5001msec) 00:41:35.507 slat (nsec): min=4299, max=37312, avg=13217.73, stdev=4371.51 00:41:35.507 clat (usec): min=769, max=10867, avg=4050.57, stdev=683.78 00:41:35.507 lat (usec): min=782, max=10880, avg=4063.79, stdev=683.62 00:41:35.507 clat percentiles (usec): 00:41:35.507 | 1.00th=[ 2245], 5.00th=[ 3326], 10.00th=[ 3523], 20.00th=[ 3687], 00:41:35.507 | 30.00th=[ 3884], 40.00th=[ 3949], 50.00th=[ 3982], 60.00th=[ 4047], 00:41:35.507 | 70.00th=[ 4080], 80.00th=[ 4178], 90.00th=[ 4686], 95.00th=[ 5473], 00:41:35.507 | 99.00th=[ 6521], 99.50th=[ 6980], 99.90th=[ 7373], 99.95th=[10683], 00:41:35.507 | 99.99th=[10814] 00:41:35.507 bw ( KiB/s): min=15198, max=15984, per=24.94%, avg=15686.89, stdev=315.26, samples=9 00:41:35.507 iops : min= 1899, max= 1998, avg=1960.78, stdev=39.55, samples=9 00:41:35.507 lat (usec) : 1000=0.10% 00:41:35.507 lat (msec) : 2=0.72%, 4=51.20%, 10=47.89%, 20=0.08% 00:41:35.507 cpu : usr=94.06%, sys=5.36%, ctx=16, majf=0, minf=9 00:41:35.507 IO depths : 1=0.1%, 2=16.1%, 4=56.1%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:35.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.507 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.507 issued rwts: total=9759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:35.507 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:35.507 filename1: (groupid=0, jobs=1): err= 0: pid=470245: Mon Nov 18 07:26:55 2024 00:41:35.507 read: IOPS=1964, BW=15.3MiB/s (16.1MB/s)(76.8MiB/5002msec) 00:41:35.507 slat (nsec): min=3858, max=35992, avg=13157.41, stdev=4032.86 00:41:35.507 clat (usec): min=834, max=7345, avg=4025.20, stdev=637.83 00:41:35.507 lat (usec): min=843, max=7353, avg=4038.36, stdev=637.59 00:41:35.507 clat percentiles (usec): 00:41:35.507 | 1.00th=[ 
2376], 5.00th=[ 3294], 10.00th=[ 3490], 20.00th=[ 3687], 00:41:35.507 | 30.00th=[ 3851], 40.00th=[ 3949], 50.00th=[ 3982], 60.00th=[ 4047], 00:41:35.507 | 70.00th=[ 4080], 80.00th=[ 4178], 90.00th=[ 4555], 95.00th=[ 5211], 00:41:35.507 | 99.00th=[ 6718], 99.50th=[ 6980], 99.90th=[ 7242], 99.95th=[ 7308], 00:41:35.507 | 99.99th=[ 7373] 00:41:35.507 bw ( KiB/s): min=15376, max=16384, per=24.96%, avg=15696.00, stdev=303.26, samples=9 00:41:35.507 iops : min= 1922, max= 2048, avg=1962.00, stdev=37.91, samples=9 00:41:35.507 lat (usec) : 1000=0.04% 00:41:35.507 lat (msec) : 2=0.61%, 4=53.10%, 10=46.25% 00:41:35.507 cpu : usr=94.68%, sys=4.68%, ctx=60, majf=0, minf=0 00:41:35.507 IO depths : 1=0.3%, 2=14.4%, 4=57.5%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:35.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.507 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.507 issued rwts: total=9828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:35.507 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:35.507 filename1: (groupid=0, jobs=1): err= 0: pid=470246: Mon Nov 18 07:26:55 2024 00:41:35.507 read: IOPS=1929, BW=15.1MiB/s (15.8MB/s)(75.4MiB/5002msec) 00:41:35.507 slat (nsec): min=4014, max=90882, avg=12804.30, stdev=4093.64 00:41:35.507 clat (usec): min=893, max=10474, avg=4102.79, stdev=726.88 00:41:35.507 lat (usec): min=906, max=10485, avg=4115.60, stdev=726.51 00:41:35.507 clat percentiles (usec): 00:41:35.507 | 1.00th=[ 2573], 5.00th=[ 3326], 10.00th=[ 3556], 20.00th=[ 3687], 00:41:35.507 | 30.00th=[ 3884], 40.00th=[ 3949], 50.00th=[ 4015], 60.00th=[ 4047], 00:41:35.507 | 70.00th=[ 4113], 80.00th=[ 4228], 90.00th=[ 4817], 95.00th=[ 5735], 00:41:35.507 | 99.00th=[ 7046], 99.50th=[ 7177], 99.90th=[ 7373], 99.95th=[ 7504], 00:41:35.507 | 99.99th=[10421] 00:41:35.507 bw ( KiB/s): min=14848, max=15680, per=24.38%, avg=15336.89, stdev=295.04, samples=9 00:41:35.507 iops : min= 1856, max= 1960, avg=1917.11, stdev=36.88, samples=9 00:41:35.507 lat (usec) : 1000=0.02% 00:41:35.507 lat (msec) : 2=0.54%, 4=47.81%, 10=51.62%, 20=0.01% 00:41:35.507 cpu : usr=95.34%, sys=4.18%, ctx=7, majf=0, minf=0 00:41:35.507 IO depths : 1=0.2%, 2=11.8%, 4=59.6%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:35.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.507 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.507 issued rwts: total=9650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:35.507 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:35.507 00:41:35.507 Run status group 0 (all jobs): 00:41:35.507 READ: bw=61.4MiB/s (64.4MB/s), 15.1MiB/s-15.8MiB/s (15.8MB/s-16.5MB/s), io=307MiB (322MB), run=5001-5002msec 00:41:35.507 07:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:35.507 07:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:35.507 07:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:35.507 07:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:35.508 07:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:35.508 07:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:35.508 07:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.508 07:26:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:41:35.508 07:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.508 07:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:35.508 07:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.508 07:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:35.508 07:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.508 07:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:35.508 07:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:35.508 07:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:35.508 07:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:35.508 07:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.508 07:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:35.508 07:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.508 07:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:35.508 07:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.508 07:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:35.508 07:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.508 00:41:35.508 real 0m24.276s 00:41:35.508 user 4m32.720s 00:41:35.508 sys 0m6.309s 00:41:35.508 07:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:35.508 07:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:35.508 ************************************ 00:41:35.508 END TEST fio_dif_rand_params 00:41:35.508 ************************************ 00:41:35.508 07:26:56 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:35.508 07:26:56 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:35.508 07:26:56 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:35.508 07:26:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:35.508 ************************************ 00:41:35.508 START TEST fio_dif_digest 00:41:35.508 ************************************ 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:35.508 bdev_null0 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:35.508 [2024-11-18 07:26:56.253755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:35.508 { 00:41:35.508 "params": { 00:41:35.508 "name": "Nvme$subsystem", 00:41:35.508 "trtype": "$TEST_TRANSPORT", 00:41:35.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:35.508 "adrfam": "ipv4", 00:41:35.508 "trsvcid": "$NVMF_PORT", 00:41:35.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:35.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:35.508 
"hdgst": ${hdgst:-false}, 00:41:35.508 "ddgst": ${ddgst:-false} 00:41:35.508 }, 00:41:35.508 "method": "bdev_nvme_attach_controller" 00:41:35.508 } 00:41:35.508 EOF 00:41:35.508 )") 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:35.508 "params": { 00:41:35.508 "name": "Nvme0", 00:41:35.508 "trtype": "tcp", 00:41:35.508 "traddr": "10.0.0.2", 00:41:35.508 "adrfam": "ipv4", 00:41:35.508 "trsvcid": "4420", 00:41:35.508 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:35.508 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:35.508 "hdgst": true, 00:41:35.508 "ddgst": true 00:41:35.508 }, 00:41:35.508 "method": "bdev_nvme_attach_controller" 00:41:35.508 }' 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:35.508 07:26:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:35.767 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:35.767 ... 
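A minimal standalone sketch of what the trace above assembles: the spdk_bdev fio plugin is preloaded into stock fio and fed a bdev JSON config that attaches the TCP controller with header and data digests enabled. Addresses, NQNs and paths are taken from this run; the job file is only an approximation of the parameters the test sets (bs=128k, iodepth=3, 3 jobs, 10s), not the exact generated configuration, and the JSON wrapper assumes the standard SPDK "subsystems" config layout.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Bdev config for the spdk_bdev ioengine: one NVMe/TCP controller
# attached with hdgst/ddgst enabled, matching the resolved params
# printed above.
cat > /tmp/bdev_digest.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF

# fio job approximating the test parameters: 3 jobs of 128k random reads
# at queue depth 3 for 10 seconds against namespace 1 of Nvme0.
cat > /tmp/dif_digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
runtime=10
time_based=1

[filename0]
filename=Nvme0n1
numjobs=3
EOF

LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" \
  /usr/src/fio/fio --spdk_json_conf=/tmp/bdev_digest.json /tmp/dif_digest.fio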
00:41:35.767 fio-3.35 00:41:35.767 Starting 3 threads 00:41:47.969 00:41:47.969 filename0: (groupid=0, jobs=1): err= 0: pid=471032: Mon Nov 18 07:27:07 2024 00:41:47.969 read: IOPS=206, BW=25.8MiB/s (27.1MB/s)(259MiB/10043msec) 00:41:47.969 slat (nsec): min=7033, max=40484, avg=13510.74, stdev=3310.78 00:41:47.969 clat (usec): min=9594, max=51982, avg=14496.93, stdev=1554.55 00:41:47.969 lat (usec): min=9606, max=51995, avg=14510.44, stdev=1554.49 00:41:47.969 clat percentiles (usec): 00:41:47.969 | 1.00th=[11731], 5.00th=[12649], 10.00th=[13173], 20.00th=[13698], 00:41:47.969 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:41:47.969 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15795], 95.00th=[16319], 00:41:47.969 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18482], 99.95th=[48497], 00:41:47.969 | 99.99th=[52167] 00:41:47.969 bw ( KiB/s): min=25344, max=27392, per=33.34%, avg=26508.80, stdev=521.84, samples=20 00:41:47.969 iops : min= 198, max= 214, avg=207.10, stdev= 4.08, samples=20 00:41:47.969 lat (msec) : 10=0.19%, 20=99.71%, 50=0.05%, 100=0.05% 00:41:47.969 cpu : usr=92.46%, sys=7.01%, ctx=18, majf=0, minf=97 00:41:47.969 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:47.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.969 issued rwts: total=2073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.969 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:47.969 filename0: (groupid=0, jobs=1): err= 0: pid=471033: Mon Nov 18 07:27:07 2024 00:41:47.969 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(265MiB/10047msec) 00:41:47.969 slat (nsec): min=6975, max=45932, avg=13302.82, stdev=2960.53 00:41:47.969 clat (usec): min=10990, max=53126, avg=14180.70, stdev=2099.82 00:41:47.969 lat (usec): min=11003, max=53149, avg=14194.01, stdev=2099.98 00:41:47.969 clat percentiles (usec): 00:41:47.969 | 1.00th=[11731], 5.00th=[12387], 10.00th=[12911], 20.00th=[13304], 00:41:47.969 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:41:47.969 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15401], 95.00th=[15795], 00:41:47.969 | 99.00th=[16909], 99.50th=[17171], 99.90th=[53216], 99.95th=[53216], 00:41:47.969 | 99.99th=[53216] 00:41:47.969 bw ( KiB/s): min=24576, max=28416, per=34.08%, avg=27097.60, stdev=797.85, samples=20 00:41:47.969 iops : min= 192, max= 222, avg=211.70, stdev= 6.23, samples=20 00:41:47.969 lat (msec) : 20=99.76%, 50=0.05%, 100=0.19% 00:41:47.969 cpu : usr=92.37%, sys=7.11%, ctx=15, majf=0, minf=112 00:41:47.969 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:47.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.969 issued rwts: total=2120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.969 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:47.969 filename0: (groupid=0, jobs=1): err= 0: pid=471034: Mon Nov 18 07:27:07 2024 00:41:47.969 read: IOPS=203, BW=25.5MiB/s (26.7MB/s)(256MiB/10045msec) 00:41:47.969 slat (nsec): min=7091, max=63803, avg=13577.85, stdev=3413.22 00:41:47.969 clat (usec): min=9226, max=53579, avg=14676.91, stdev=1555.37 00:41:47.969 lat (usec): min=9241, max=53593, avg=14690.48, stdev=1555.38 00:41:47.969 clat percentiles (usec): 00:41:47.969 | 1.00th=[12125], 5.00th=[13042], 10.00th=[13435], 20.00th=[13829], 
00:41:47.969 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:41:47.969 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15926], 95.00th=[16319], 00:41:47.969 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18220], 99.95th=[50070], 00:41:47.969 | 99.99th=[53740] 00:41:47.969 bw ( KiB/s): min=25344, max=27648, per=32.94%, avg=26188.80, stdev=532.47, samples=20 00:41:47.969 iops : min= 198, max= 216, avg=204.60, stdev= 4.16, samples=20 00:41:47.969 lat (msec) : 10=0.24%, 20=99.66%, 50=0.05%, 100=0.05% 00:41:47.969 cpu : usr=92.98%, sys=6.48%, ctx=21, majf=0, minf=189 00:41:47.969 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:47.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.969 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.969 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:47.969 00:41:47.969 Run status group 0 (all jobs): 00:41:47.970 READ: bw=77.6MiB/s (81.4MB/s), 25.5MiB/s-26.4MiB/s (26.7MB/s-27.7MB/s), io=780MiB (818MB), run=10043-10047msec 00:41:47.970 07:27:07 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:41:47.970 07:27:07 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:41:47.970 07:27:07 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:41:47.970 07:27:07 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:47.970 07:27:07 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:41:47.970 07:27:07 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:47.970 07:27:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.970 07:27:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:47.970 07:27:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.970 07:27:07 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:47.970 07:27:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.970 07:27:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:47.970 07:27:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.970 00:41:47.970 real 0m11.282s 00:41:47.970 user 0m29.141s 00:41:47.970 sys 0m2.350s 00:41:47.970 07:27:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:47.970 07:27:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:47.970 ************************************ 00:41:47.970 END TEST fio_dif_digest 00:41:47.970 ************************************ 00:41:47.970 07:27:07 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:41:47.970 07:27:07 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:41:47.970 07:27:07 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:47.970 07:27:07 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:41:47.970 07:27:07 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:47.970 07:27:07 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:41:47.970 07:27:07 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:47.970 07:27:07 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:47.970 rmmod nvme_tcp 00:41:47.970 rmmod nvme_fabrics 00:41:47.970 rmmod nvme_keyring 00:41:47.970 07:27:07 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:47.970 07:27:07 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:41:47.970 07:27:07 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:41:47.970 07:27:07 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 464992 ']' 00:41:47.970 07:27:07 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 464992 00:41:47.970 07:27:07 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 464992 ']' 00:41:47.970 07:27:07 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 464992 00:41:47.970 07:27:07 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:41:47.970 07:27:07 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:47.970 07:27:07 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 464992 00:41:47.970 07:27:07 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:47.970 07:27:07 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:47.970 07:27:07 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 464992' 00:41:47.970 killing process with pid 464992 00:41:47.970 07:27:07 nvmf_dif -- common/autotest_common.sh@973 -- # kill 464992 00:41:47.970 07:27:07 nvmf_dif -- common/autotest_common.sh@978 -- # wait 464992 00:41:47.970 07:27:07 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:41:47.970 07:27:07 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:47.970 Waiting for block devices as requested 00:41:47.970 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:41:48.228 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:48.228 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:48.228 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:48.486 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:48.486 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:48.486 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:48.486 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:48.745 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:48.745 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:48.745 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:48.745 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:49.006 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:49.006 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:49.006 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:49.006 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:49.266 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:49.266 07:27:10 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:49.266 07:27:10 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:49.266 07:27:10 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:41:49.266 07:27:10 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:41:49.266 07:27:10 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:49.266 07:27:10 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:41:49.266 07:27:10 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:49.266 07:27:10 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:49.266 07:27:10 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:49.266 07:27:10 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:49.266 07:27:10 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:51.794 07:27:12 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:51.794 
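The cleanup traced above amounts to unloading the initiator-side kernel NVMe/TCP stack, stopping the target process, and stripping only the SPDK-tagged firewall rules and test addresses. A rough standalone equivalent, assuming this run's target pid (464992) and the cvl_0_* interface names:

NVMF_PID=464992          # nvmf_tgt pid from this run
TGT_NS=cvl_0_0_ns_spdk   # target-side network namespace

# Unload the kernel NVMe/TCP initiator modules (removing nvme-tcp also
# drops nvme-fabrics and nvme-keyring when nothing else uses them).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Stop the target and wait for it to actually exit.
kill "$NVMF_PID"
while kill -0 "$NVMF_PID" 2>/dev/null; do sleep 0.5; done

# Remove only the iptables rules the test inserted (they carry an
# SPDK_NVMF comment), leaving everything else untouched.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Flush the test addresses and drop the target namespace.
ip -4 addr flush cvl_0_1
ip netns exec "$TGT_NS" ip -4 addr flush cvl_0_0 2>/dev/null
ip netns delete "$TGT_NS" 2>/dev/null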
00:41:51.794 real 1m6.925s 00:41:51.794 user 6m29.704s 00:41:51.794 sys 0m17.799s 00:41:51.794 07:27:12 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:51.794 07:27:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:51.794 ************************************ 00:41:51.794 END TEST nvmf_dif 00:41:51.794 ************************************ 00:41:51.794 07:27:12 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:51.794 07:27:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:51.794 07:27:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:51.794 07:27:12 -- common/autotest_common.sh@10 -- # set +x 00:41:51.794 ************************************ 00:41:51.794 START TEST nvmf_abort_qd_sizes 00:41:51.794 ************************************ 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:51.794 * Looking for test storage... 00:41:51.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:51.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.794 --rc genhtml_branch_coverage=1 00:41:51.794 --rc genhtml_function_coverage=1 00:41:51.794 --rc genhtml_legend=1 00:41:51.794 --rc geninfo_all_blocks=1 00:41:51.794 --rc geninfo_unexecuted_blocks=1 00:41:51.794 00:41:51.794 ' 00:41:51.794 07:27:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:51.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.794 --rc genhtml_branch_coverage=1 00:41:51.794 --rc genhtml_function_coverage=1 00:41:51.794 --rc genhtml_legend=1 00:41:51.794 --rc geninfo_all_blocks=1 00:41:51.794 --rc geninfo_unexecuted_blocks=1 00:41:51.795 00:41:51.795 ' 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:51.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.795 --rc genhtml_branch_coverage=1 00:41:51.795 --rc genhtml_function_coverage=1 00:41:51.795 --rc genhtml_legend=1 00:41:51.795 --rc geninfo_all_blocks=1 00:41:51.795 --rc geninfo_unexecuted_blocks=1 00:41:51.795 00:41:51.795 ' 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:51.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.795 --rc genhtml_branch_coverage=1 00:41:51.795 --rc genhtml_function_coverage=1 00:41:51.795 --rc genhtml_legend=1 00:41:51.795 --rc geninfo_all_blocks=1 00:41:51.795 --rc geninfo_unexecuted_blocks=1 00:41:51.795 00:41:51.795 ' 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:51.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:41:51.795 07:27:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:53.693 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:53.693 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:41:53.693 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:53.694 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:53.694 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:53.694 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:53.694 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:53.694 07:27:14 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:53.694 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:53.953 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:53.953 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:53.953 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:53.953 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:53.953 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:53.953 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:53.953 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:53.953 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:53.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:53.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:41:53.953 00:41:53.953 --- 10.0.0.2 ping statistics --- 00:41:53.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:53.953 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:41:53.953 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:53.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:53.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:41:53.953 00:41:53.953 --- 10.0.0.1 ping statistics --- 00:41:53.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:53.953 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:41:53.953 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:53.953 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:41:53.953 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:41:53.953 07:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:55.329 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:55.329 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:55.329 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:55.329 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:55.329 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:55.329 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:55.329 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:41:55.329 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:55.329 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:55.329 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:55.329 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:55.329 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:55.329 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:55.329 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:55.329 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:41:55.329 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:56.267 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:41:56.267 07:27:17 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:56.267 07:27:17 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:56.267 07:27:17 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:56.267 07:27:17 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:56.267 07:27:17 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:56.267 07:27:17 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:56.267 07:27:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:41:56.267 07:27:17 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:56.267 07:27:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:56.267 07:27:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:56.267 07:27:17 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=476575 00:41:56.267 07:27:17 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:41:56.267 07:27:17 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 476575 00:41:56.267 07:27:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 476575 ']' 00:41:56.267 07:27:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:56.267 07:27:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:56.267 07:27:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
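The topology traced above puts one port of the test NIC into its own network namespace, so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) talk over real TCP rather than loopback. A condensed sketch of that setup plus the namespaced nvmf_tgt launch, using the names and addresses from this run:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TGT_NS=cvl_0_0_ns_spdk

# Move the target-side netdev into its own namespace and address both ends.
ip netns add "$TGT_NS"
ip link set cvl_0_0 netns "$TGT_NS"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TGT_NS" ip link set cvl_0_0 up
ip netns exec "$TGT_NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in on the initiator interface; the comment
# tag lets the teardown strip exactly this rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Verify reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$TGT_NS" ping -c 1 10.0.0.1

# Launch nvmf_tgt inside the namespace on 4 cores (-m 0xf), then poll
# its RPC socket until it answers, roughly what waitforlisten does.
ip netns exec "$TGT_NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xf &
until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done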
00:41:56.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:56.268 07:27:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:56.268 07:27:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:56.268 [2024-11-18 07:27:17.143973] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:41:56.268 [2024-11-18 07:27:17.144066] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:56.268 [2024-11-18 07:27:17.214406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:56.526 [2024-11-18 07:27:17.267203] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:56.526 [2024-11-18 07:27:17.267277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:56.526 [2024-11-18 07:27:17.267290] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:56.526 [2024-11-18 07:27:17.267302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:56.526 [2024-11-18 07:27:17.267311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:56.526 [2024-11-18 07:27:17.268822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:56.526 [2024-11-18 07:27:17.268883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:56.526 [2024-11-18 07:27:17.268950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:56.526 [2024-11-18 07:27:17.268952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:41:56.526 
07:27:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:56.526 07:27:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:56.526 ************************************ 00:41:56.526 START TEST spdk_target_abort 00:41:56.526 ************************************ 00:41:56.526 07:27:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:41:56.526 07:27:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:41:56.526 07:27:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:41:56.526 07:27:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.526 07:27:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:59.807 spdk_targetn1 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:59.807 [2024-11-18 07:27:20.280225] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:59.807 [2024-11-18 07:27:20.328955] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:59.807 07:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:03.088 Initializing NVMe Controllers 00:42:03.088 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:03.088 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:03.088 Initialization complete. Launching workers. 00:42:03.088 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12983, failed: 0 00:42:03.088 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1227, failed to submit 11756 00:42:03.088 success 765, unsuccessful 462, failed 0 00:42:03.088 07:27:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:03.088 07:27:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:06.380 Initializing NVMe Controllers 00:42:06.380 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:06.380 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:06.380 Initialization complete. Launching workers. 00:42:06.380 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8646, failed: 0 00:42:06.380 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1237, failed to submit 7409 00:42:06.380 success 342, unsuccessful 895, failed 0 00:42:06.380 07:27:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:06.380 07:27:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:09.661 Initializing NVMe Controllers 00:42:09.661 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:09.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:09.661 Initialization complete. Launching workers. 
00:42:09.661 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31436, failed: 0 00:42:09.661 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2701, failed to submit 28735 00:42:09.661 success 521, unsuccessful 2180, failed 0 00:42:09.661 07:27:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:42:09.661 07:27:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:09.661 07:27:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:09.661 07:27:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:09.661 07:27:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:42:09.661 07:27:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:09.661 07:27:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:10.594 07:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:10.594 07:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 476575 00:42:10.594 07:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 476575 ']' 00:42:10.594 07:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 476575 00:42:10.594 07:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:42:10.594 07:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:10.594 07:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 476575 00:42:10.594 07:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:10.594 07:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:10.594 07:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 476575' 00:42:10.594 killing process with pid 476575 00:42:10.594 07:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 476575 00:42:10.594 07:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 476575 00:42:10.853 00:42:10.853 real 0m14.204s 00:42:10.853 user 0m54.107s 00:42:10.853 sys 0m2.439s 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:10.853 ************************************ 00:42:10.853 END TEST spdk_target_abort 00:42:10.853 ************************************ 00:42:10.853 07:27:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:42:10.853 07:27:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:10.853 07:27:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:10.853 07:27:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:10.853 ************************************ 00:42:10.853 START TEST kernel_target_abort 00:42:10.853 
************************************ 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:42:10.853 07:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:11.788 Waiting for block devices as requested 00:42:12.046 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:12.046 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:12.305 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:12.305 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:12.305 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:12.305 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:12.563 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:12.563 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:12.563 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:12.821 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:12.821 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:12.821 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:12.821 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:13.078 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:13.078 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:13.079 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:13.079 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:13.337 No valid GPT data, bailing 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:13.337 07:27:34 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:13.337 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:42:13.595 00:42:13.595 Discovery Log Number of Records 2, Generation counter 2 00:42:13.595 =====Discovery Log Entry 0====== 00:42:13.595 trtype: tcp 00:42:13.595 adrfam: ipv4 00:42:13.595 subtype: current discovery subsystem 00:42:13.595 treq: not specified, sq flow control disable supported 00:42:13.595 portid: 1 00:42:13.595 trsvcid: 4420 00:42:13.595 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:13.595 traddr: 10.0.0.1 00:42:13.595 eflags: none 00:42:13.595 sectype: none 00:42:13.595 =====Discovery Log Entry 1====== 00:42:13.595 trtype: tcp 00:42:13.595 adrfam: ipv4 00:42:13.595 subtype: nvme subsystem 00:42:13.595 treq: not specified, sq flow control disable supported 00:42:13.595 portid: 1 00:42:13.595 trsvcid: 4420 00:42:13.595 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:13.595 traddr: 10.0.0.1 00:42:13.595 eflags: none 00:42:13.595 sectype: none 00:42:13.595 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:42:13.595 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:13.595 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:13.595 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:13.595 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:13.595 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:13.595 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:13.595 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:13.595 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:13.595 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:13.595 07:27:34 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:13.595 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:13.595 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:13.595 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:13.595 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:13.595 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:13.595 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:13.595 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:13.595 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:13.595 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:13.595 07:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:16.874 Initializing NVMe Controllers 00:42:16.874 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:16.874 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:16.874 Initialization complete. Launching workers. 00:42:16.874 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 57105, failed: 0 00:42:16.874 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 57105, failed to submit 0 00:42:16.874 success 0, unsuccessful 57105, failed 0 00:42:16.874 07:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:16.874 07:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:20.189 Initializing NVMe Controllers 00:42:20.189 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:20.189 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:20.189 Initialization complete. Launching workers. 
00:42:20.189 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101479, failed: 0 00:42:20.189 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25570, failed to submit 75909 00:42:20.189 success 0, unsuccessful 25570, failed 0 00:42:20.189 07:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:20.189 07:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:22.730 Initializing NVMe Controllers 00:42:22.730 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:22.730 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:22.730 Initialization complete. Launching workers. 00:42:22.730 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95470, failed: 0 00:42:22.730 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23882, failed to submit 71588 00:42:22.730 success 0, unsuccessful 23882, failed 0 00:42:22.730 07:27:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:22.730 07:27:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:22.730 07:27:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:42:22.730 07:27:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:22.730 07:27:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:22.730 07:27:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:22.730 07:27:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:22.730 07:27:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:42:22.730 07:27:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:42:22.730 07:27:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:24.106 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:24.106 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:24.106 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:24.106 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:24.106 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:24.106 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:24.106 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:24.106 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:24.106 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:24.106 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:24.106 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:24.106 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:24.106 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:24.106 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:24.106 0000:80:04.1 (8086 0e21): ioatdma 
-> vfio-pci 00:42:24.365 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:25.299 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:25.299 00:42:25.299 real 0m14.408s 00:42:25.299 user 0m6.620s 00:42:25.299 sys 0m3.295s 00:42:25.299 07:27:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:25.299 07:27:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:25.299 ************************************ 00:42:25.299 END TEST kernel_target_abort 00:42:25.299 ************************************ 00:42:25.299 07:27:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:25.299 07:27:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:25.299 07:27:46 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:25.299 07:27:46 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:42:25.299 07:27:46 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:25.299 07:27:46 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:42:25.299 07:27:46 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:25.299 07:27:46 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:25.299 rmmod nvme_tcp 00:42:25.299 rmmod nvme_fabrics 00:42:25.299 rmmod nvme_keyring 00:42:25.299 07:27:46 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:25.299 07:27:46 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:42:25.299 07:27:46 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:42:25.299 07:27:46 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 476575 ']' 00:42:25.299 07:27:46 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 476575 00:42:25.299 07:27:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 476575 ']' 00:42:25.299 07:27:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 476575 00:42:25.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (476575) - No such process 00:42:25.299 07:27:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 476575 is not found' 00:42:25.299 Process with pid 476575 is not found 00:42:25.299 07:27:46 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:42:25.299 07:27:46 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:26.676 Waiting for block devices as requested 00:42:26.676 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:26.676 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:26.935 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:26.935 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:26.935 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:26.935 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:27.195 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:27.195 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:27.195 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:27.453 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:27.453 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:27.453 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:27.453 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:27.453 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:27.713 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:27.713 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:27.713 0000:80:04.0 
(8086 0e20): vfio-pci -> ioatdma 00:42:27.973 07:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:27.973 07:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:27.973 07:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:42:27.973 07:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:42:27.973 07:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:42:27.973 07:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:27.973 07:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:27.973 07:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:27.973 07:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:27.973 07:27:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:27.973 07:27:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:29.880 07:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:29.880 00:42:29.880 real 0m38.485s 00:42:29.880 user 1m3.081s 00:42:29.880 sys 0m9.442s 00:42:29.880 07:27:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:29.880 07:27:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:29.880 ************************************ 00:42:29.880 END TEST nvmf_abort_qd_sizes 00:42:29.880 ************************************ 00:42:29.880 07:27:50 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:29.880 07:27:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:29.880 07:27:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:29.880 07:27:50 -- common/autotest_common.sh@10 -- # set +x 00:42:29.880 ************************************ 00:42:29.880 START TEST keyring_file 00:42:29.880 ************************************ 00:42:29.880 07:27:50 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:29.880 * Looking for test storage... 
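For reference, every abort run traced above invokes the same SPDK example binary with a different queue depth; a minimal sketch of that loop, assuming only the SPDK tree layout, flags, and kernel-target address that appear in this log (adjust all of them for any other environment):

    # Sketch: drive the SPDK abort example at queue depths 4, 24 and 64,
    # 50% read / 50% write, 4 KiB I/O, against the kernel NVMe/TCP target set up above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        "$SPDK/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$TRID"
    done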
00:42:29.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:29.880 07:27:50 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:29.880 07:27:50 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:42:29.880 07:27:50 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:30.139 07:27:50 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:30.139 07:27:50 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:30.139 07:27:50 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:30.139 07:27:50 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:30.139 07:27:50 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:42:30.139 07:27:50 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:42:30.139 07:27:50 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:42:30.139 07:27:50 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:42:30.139 07:27:50 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:42:30.139 07:27:50 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:42:30.139 07:27:50 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:42:30.139 07:27:50 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:30.139 07:27:50 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:42:30.139 07:27:50 keyring_file -- scripts/common.sh@345 -- # : 1 00:42:30.139 07:27:50 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:30.139 07:27:50 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:30.139 07:27:50 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:42:30.139 07:27:50 keyring_file -- scripts/common.sh@353 -- # local d=1 00:42:30.139 07:27:50 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:30.139 07:27:50 keyring_file -- scripts/common.sh@355 -- # echo 1 00:42:30.139 07:27:50 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:42:30.140 07:27:50 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:42:30.140 07:27:50 keyring_file -- scripts/common.sh@353 -- # local d=2 00:42:30.140 07:27:50 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:30.140 07:27:50 keyring_file -- scripts/common.sh@355 -- # echo 2 00:42:30.140 07:27:50 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:42:30.140 07:27:50 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:30.140 07:27:50 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:30.140 07:27:50 keyring_file -- scripts/common.sh@368 -- # return 0 00:42:30.140 07:27:50 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:30.140 07:27:50 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:30.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:30.140 --rc genhtml_branch_coverage=1 00:42:30.140 --rc genhtml_function_coverage=1 00:42:30.140 --rc genhtml_legend=1 00:42:30.140 --rc geninfo_all_blocks=1 00:42:30.140 --rc geninfo_unexecuted_blocks=1 00:42:30.140 00:42:30.140 ' 00:42:30.140 07:27:50 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:30.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:30.140 --rc genhtml_branch_coverage=1 00:42:30.140 --rc genhtml_function_coverage=1 00:42:30.140 --rc genhtml_legend=1 00:42:30.140 --rc geninfo_all_blocks=1 
00:42:30.140 --rc geninfo_unexecuted_blocks=1 00:42:30.140 00:42:30.140 ' 00:42:30.140 07:27:50 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:30.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:30.140 --rc genhtml_branch_coverage=1 00:42:30.140 --rc genhtml_function_coverage=1 00:42:30.140 --rc genhtml_legend=1 00:42:30.140 --rc geninfo_all_blocks=1 00:42:30.140 --rc geninfo_unexecuted_blocks=1 00:42:30.140 00:42:30.140 ' 00:42:30.140 07:27:50 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:30.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:30.140 --rc genhtml_branch_coverage=1 00:42:30.140 --rc genhtml_function_coverage=1 00:42:30.140 --rc genhtml_legend=1 00:42:30.140 --rc geninfo_all_blocks=1 00:42:30.140 --rc geninfo_unexecuted_blocks=1 00:42:30.140 00:42:30.140 ' 00:42:30.140 07:27:50 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:30.140 07:27:50 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:30.140 07:27:50 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:42:30.140 07:27:50 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:30.140 07:27:50 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:30.140 07:27:50 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:30.140 07:27:50 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:30.140 07:27:50 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:30.140 07:27:50 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:30.140 07:27:50 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:30.140 07:27:50 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@51 -- # : 0 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:30.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:30.140 07:27:50 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:30.140 07:27:50 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:30.140 07:27:50 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:30.140 07:27:50 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:30.140 07:27:50 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:30.140 07:27:50 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:30.140 07:27:50 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:30.140 07:27:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
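The prep_key trace that continues below builds a mode-0600 PSK file, and later steps register it with the bdevperf keyring over its RPC socket before attaching a controller by key name; a condensed sketch of that flow, using only the socket path, key name, and RPC commands that appear in this trace (the temp file name is whatever mktemp returned on this run and is a placeholder for any other run):

    # Sketch: register an NVMe/TCP PSK file with the bdevperf keyring and reference it
    # by name when attaching the controller, as the trace below does step by step.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    KEY0=/tmp/tmp.V8Es8jFGxJ   # produced by mktemp in this run; differs elsewhere
    chmod 0600 "$KEY0"
    "$RPC" -s /var/tmp/bperf.sock keyring_file_add_key key0 "$KEY0"
    "$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk key0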
00:42:30.140 07:27:50 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:30.140 07:27:50 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:30.140 07:27:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:30.140 07:27:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:30.140 07:27:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.V8Es8jFGxJ 00:42:30.140 07:27:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:30.140 07:27:50 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:30.140 07:27:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.V8Es8jFGxJ 00:42:30.140 07:27:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.V8Es8jFGxJ 00:42:30.140 07:27:51 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.V8Es8jFGxJ 00:42:30.140 07:27:51 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:30.140 07:27:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:30.140 07:27:51 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:30.140 07:27:51 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:30.140 07:27:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:30.140 07:27:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:30.140 07:27:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.SBTPPhJgmw 00:42:30.140 07:27:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:30.140 07:27:51 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:30.140 07:27:51 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:30.140 07:27:51 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:30.141 07:27:51 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:30.141 07:27:51 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:30.141 07:27:51 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:30.141 07:27:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SBTPPhJgmw 00:42:30.141 07:27:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.SBTPPhJgmw 00:42:30.141 07:27:51 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.SBTPPhJgmw 00:42:30.141 07:27:51 keyring_file -- keyring/file.sh@30 -- # tgtpid=482336 00:42:30.141 07:27:51 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:30.141 07:27:51 keyring_file -- keyring/file.sh@32 -- # waitforlisten 482336 00:42:30.141 07:27:51 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 482336 ']' 00:42:30.141 07:27:51 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:30.141 07:27:51 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:30.141 07:27:51 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:30.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:30.141 07:27:51 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:30.141 07:27:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:30.141 [2024-11-18 07:27:51.115886] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:42:30.141 [2024-11-18 07:27:51.115990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482336 ] 00:42:30.399 [2024-11-18 07:27:51.184709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:30.399 [2024-11-18 07:27:51.230713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:30.657 07:27:51 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:30.657 07:27:51 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:30.657 07:27:51 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:30.657 07:27:51 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.657 07:27:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:30.657 [2024-11-18 07:27:51.467413] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:30.657 null0 00:42:30.657 [2024-11-18 07:27:51.499506] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:30.657 [2024-11-18 07:27:51.500018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:30.657 07:27:51 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:30.657 07:27:51 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:30.657 07:27:51 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:30.657 07:27:51 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:30.657 07:27:51 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:42:30.657 07:27:51 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:30.657 07:27:51 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:42:30.657 07:27:51 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:30.657 07:27:51 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:30.657 07:27:51 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.657 07:27:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:30.657 [2024-11-18 07:27:51.523541] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:30.657 request: 00:42:30.657 { 00:42:30.657 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:30.657 "secure_channel": false, 00:42:30.657 "listen_address": { 00:42:30.657 "trtype": "tcp", 00:42:30.657 "traddr": "127.0.0.1", 00:42:30.657 "trsvcid": "4420" 00:42:30.657 }, 00:42:30.657 "method": "nvmf_subsystem_add_listener", 00:42:30.657 "req_id": 1 00:42:30.657 } 00:42:30.657 Got JSON-RPC error response 00:42:30.657 response: 00:42:30.657 { 00:42:30.657 "code": 
-32602, 00:42:30.657 "message": "Invalid parameters" 00:42:30.657 } 00:42:30.658 07:27:51 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:30.658 07:27:51 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:30.658 07:27:51 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:30.658 07:27:51 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:30.658 07:27:51 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:30.658 07:27:51 keyring_file -- keyring/file.sh@47 -- # bperfpid=482349 00:42:30.658 07:27:51 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:30.658 07:27:51 keyring_file -- keyring/file.sh@49 -- # waitforlisten 482349 /var/tmp/bperf.sock 00:42:30.658 07:27:51 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 482349 ']' 00:42:30.658 07:27:51 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:30.658 07:27:51 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:30.658 07:27:51 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:30.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:30.658 07:27:51 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:30.658 07:27:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:30.658 [2024-11-18 07:27:51.570872] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:42:30.658 [2024-11-18 07:27:51.570936] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482349 ] 00:42:30.916 [2024-11-18 07:27:51.636321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:30.916 [2024-11-18 07:27:51.682384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:30.916 07:27:51 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:30.916 07:27:51 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:30.916 07:27:51 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.V8Es8jFGxJ 00:42:30.916 07:27:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.V8Es8jFGxJ 00:42:31.173 07:27:52 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.SBTPPhJgmw 00:42:31.173 07:27:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.SBTPPhJgmw 00:42:31.430 07:27:52 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:42:31.430 07:27:52 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:31.430 07:27:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:31.430 07:27:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:31.431 07:27:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:31.687 
07:27:52 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.V8Es8jFGxJ == \/\t\m\p\/\t\m\p\.\V\8\E\s\8\j\F\G\x\J ]] 00:42:31.687 07:27:52 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:42:31.687 07:27:52 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:42:31.687 07:27:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:31.687 07:27:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:31.687 07:27:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:31.944 07:27:52 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.SBTPPhJgmw == \/\t\m\p\/\t\m\p\.\S\B\T\P\P\h\J\g\m\w ]] 00:42:31.944 07:27:52 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:42:31.944 07:27:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:31.944 07:27:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:31.944 07:27:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:31.944 07:27:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:31.944 07:27:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:32.510 07:27:53 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:32.510 07:27:53 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:42:32.510 07:27:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:32.510 07:27:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:32.510 07:27:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:32.510 07:27:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:32.510 07:27:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:32.510 07:27:53 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:42:32.510 07:27:53 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:32.510 07:27:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:32.768 [2024-11-18 07:27:53.705901] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:33.026 nvme0n1 00:42:33.026 07:27:53 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:42:33.026 07:27:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:33.026 07:27:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:33.026 07:27:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:33.026 07:27:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:33.026 07:27:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:33.282 07:27:54 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:42:33.282 07:27:54 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:42:33.282 07:27:54 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:42:33.282 07:27:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:33.282 07:27:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:33.282 07:27:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:33.282 07:27:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:33.538 07:27:54 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:42:33.538 07:27:54 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:33.538 Running I/O for 1 seconds... 00:42:34.912 10175.00 IOPS, 39.75 MiB/s 00:42:34.912 Latency(us) 00:42:34.912 [2024-11-18T06:27:55.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:34.912 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:34.912 nvme0n1 : 1.01 10230.25 39.96 0.00 0.00 12475.05 5170.06 23398.78 00:42:34.912 [2024-11-18T06:27:55.890Z] =================================================================================================================== 00:42:34.912 [2024-11-18T06:27:55.890Z] Total : 10230.25 39.96 0.00 0.00 12475.05 5170.06 23398.78 00:42:34.912 { 00:42:34.912 "results": [ 00:42:34.912 { 00:42:34.912 "job": "nvme0n1", 00:42:34.912 "core_mask": "0x2", 00:42:34.912 "workload": "randrw", 00:42:34.912 "percentage": 50, 00:42:34.912 "status": "finished", 00:42:34.912 "queue_depth": 128, 00:42:34.912 "io_size": 4096, 00:42:34.912 "runtime": 1.007307, 00:42:34.912 "iops": 10230.247580926172, 00:42:34.912 "mibps": 39.96190461299286, 00:42:34.912 "io_failed": 0, 00:42:34.912 "io_timeout": 0, 00:42:34.912 "avg_latency_us": 12475.052654051431, 00:42:34.912 "min_latency_us": 5170.062222222222, 00:42:34.912 "max_latency_us": 23398.77925925926 00:42:34.912 } 00:42:34.912 ], 00:42:34.912 "core_count": 1 00:42:34.912 } 00:42:34.912 07:27:55 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:34.912 07:27:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:34.912 07:27:55 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:42:34.912 07:27:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:34.912 07:27:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:34.912 07:27:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:34.912 07:27:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:34.912 07:27:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:35.170 07:27:56 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:35.170 07:27:56 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:42:35.170 07:27:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:35.170 07:27:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:35.170 07:27:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:35.170 07:27:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:35.170 07:27:56 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:35.428 07:27:56 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:42:35.428 07:27:56 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:35.428 07:27:56 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:35.428 07:27:56 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:35.428 07:27:56 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:35.428 07:27:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:35.428 07:27:56 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:35.428 07:27:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:35.428 07:27:56 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:35.428 07:27:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:35.686 [2024-11-18 07:27:56.577556] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:35.686 [2024-11-18 07:27:56.577663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8db60 (107): Transport endpoint is not connected 00:42:35.686 [2024-11-18 07:27:56.578657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8db60 (9): Bad file descriptor 00:42:35.686 [2024-11-18 07:27:56.579656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:35.686 [2024-11-18 07:27:56.579676] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:35.686 [2024-11-18 07:27:56.579689] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:35.686 [2024-11-18 07:27:56.579704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:42:35.686 request: 00:42:35.686 { 00:42:35.686 "name": "nvme0", 00:42:35.686 "trtype": "tcp", 00:42:35.686 "traddr": "127.0.0.1", 00:42:35.686 "adrfam": "ipv4", 00:42:35.686 "trsvcid": "4420", 00:42:35.686 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:35.686 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:35.686 "prchk_reftag": false, 00:42:35.686 "prchk_guard": false, 00:42:35.686 "hdgst": false, 00:42:35.686 "ddgst": false, 00:42:35.686 "psk": "key1", 00:42:35.686 "allow_unrecognized_csi": false, 00:42:35.686 "method": "bdev_nvme_attach_controller", 00:42:35.686 "req_id": 1 00:42:35.686 } 00:42:35.686 Got JSON-RPC error response 00:42:35.686 response: 00:42:35.686 { 00:42:35.686 "code": -5, 00:42:35.686 "message": "Input/output error" 00:42:35.686 } 00:42:35.686 07:27:56 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:35.686 07:27:56 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:35.686 07:27:56 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:35.686 07:27:56 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:35.686 07:27:56 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:42:35.686 07:27:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:35.686 07:27:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:35.686 07:27:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:35.686 07:27:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:35.686 07:27:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:35.944 07:27:56 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:35.944 07:27:56 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:42:35.944 07:27:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:35.944 07:27:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:35.944 07:27:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:35.944 07:27:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:35.944 07:27:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:36.202 07:27:57 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:42:36.202 07:27:57 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:42:36.202 07:27:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:36.459 07:27:57 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:42:36.459 07:27:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:36.717 07:27:57 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:42:36.717 07:27:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:36.717 07:27:57 keyring_file -- keyring/file.sh@78 -- # jq length 00:42:36.975 07:27:57 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:42:36.975 07:27:57 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.V8Es8jFGxJ 00:42:37.233 07:27:57 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.V8Es8jFGxJ 00:42:37.233 07:27:57 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:37.233 07:27:57 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.V8Es8jFGxJ 00:42:37.233 07:27:57 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:37.233 07:27:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:37.233 07:27:57 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:37.233 07:27:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:37.233 07:27:57 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.V8Es8jFGxJ 00:42:37.233 07:27:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.V8Es8jFGxJ 00:42:37.233 [2024-11-18 07:27:58.204620] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.V8Es8jFGxJ': 0100660 00:42:37.233 [2024-11-18 07:27:58.204657] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:37.233 request: 00:42:37.233 { 00:42:37.233 "name": "key0", 00:42:37.233 "path": "/tmp/tmp.V8Es8jFGxJ", 00:42:37.233 "method": "keyring_file_add_key", 00:42:37.233 "req_id": 1 00:42:37.233 } 00:42:37.233 Got JSON-RPC error response 00:42:37.233 response: 00:42:37.233 { 00:42:37.233 "code": -1, 00:42:37.233 "message": "Operation not permitted" 00:42:37.233 } 00:42:37.491 07:27:58 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:37.491 07:27:58 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:37.491 07:27:58 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:37.491 07:27:58 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:37.491 07:27:58 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.V8Es8jFGxJ 00:42:37.491 07:27:58 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.V8Es8jFGxJ 00:42:37.491 07:27:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.V8Es8jFGxJ 00:42:37.749 07:27:58 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.V8Es8jFGxJ 00:42:37.749 07:27:58 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:42:37.749 07:27:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:37.749 07:27:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:37.749 07:27:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:37.749 07:27:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:37.749 07:27:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:38.007 07:27:58 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:42:38.007 07:27:58 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:38.007 07:27:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:38.007 07:27:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:38.007 07:27:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:38.007 07:27:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:38.007 07:27:58 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:38.007 07:27:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:38.007 07:27:58 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:38.007 07:27:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:38.266 [2024-11-18 07:27:59.030891] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.V8Es8jFGxJ': No such file or directory 00:42:38.266 [2024-11-18 07:27:59.030925] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:38.266 [2024-11-18 07:27:59.030947] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:38.266 [2024-11-18 07:27:59.030958] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:42:38.266 [2024-11-18 07:27:59.030971] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:38.266 [2024-11-18 07:27:59.030981] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:38.266 request: 00:42:38.266 { 00:42:38.266 "name": "nvme0", 00:42:38.266 "trtype": "tcp", 00:42:38.266 "traddr": "127.0.0.1", 00:42:38.266 "adrfam": "ipv4", 00:42:38.266 "trsvcid": "4420", 00:42:38.266 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:38.266 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:38.266 "prchk_reftag": false, 00:42:38.266 "prchk_guard": false, 00:42:38.266 "hdgst": false, 00:42:38.266 "ddgst": false, 00:42:38.266 "psk": "key0", 00:42:38.266 "allow_unrecognized_csi": false, 00:42:38.266 "method": "bdev_nvme_attach_controller", 00:42:38.266 "req_id": 1 00:42:38.266 } 00:42:38.266 Got JSON-RPC error response 00:42:38.266 response: 00:42:38.266 { 00:42:38.266 "code": -19, 00:42:38.266 "message": "No such device" 00:42:38.266 } 00:42:38.266 07:27:59 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:38.266 07:27:59 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:38.266 07:27:59 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:38.266 07:27:59 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:38.266 07:27:59 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:42:38.266 07:27:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:38.523 07:27:59 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:38.523 07:27:59 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:42:38.523 07:27:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:38.523 07:27:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:38.523 07:27:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:38.523 07:27:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:38.523 07:27:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AK3FyyAgeV 00:42:38.523 07:27:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:38.523 07:27:59 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:38.523 07:27:59 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:38.523 07:27:59 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:38.523 07:27:59 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:38.523 07:27:59 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:38.523 07:27:59 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:38.523 07:27:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AK3FyyAgeV 00:42:38.523 07:27:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AK3FyyAgeV 00:42:38.523 07:27:59 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.AK3FyyAgeV 00:42:38.523 07:27:59 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AK3FyyAgeV 00:42:38.523 07:27:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AK3FyyAgeV 00:42:38.803 07:27:59 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:38.804 07:27:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:39.085 nvme0n1 00:42:39.085 07:27:59 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:42:39.085 07:27:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:39.085 07:27:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:39.085 07:27:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:39.085 07:27:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:39.085 07:27:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:39.373 07:28:00 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:42:39.373 07:28:00 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:42:39.373 07:28:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:39.631 07:28:00 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:42:39.631 07:28:00 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:42:39.631 07:28:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:39.631 07:28:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:42:39.631 07:28:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:39.889 07:28:00 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:42:39.889 07:28:00 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:42:39.889 07:28:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:39.889 07:28:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:39.889 07:28:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:39.889 07:28:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:39.889 07:28:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:40.147 07:28:01 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:42:40.147 07:28:01 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:40.147 07:28:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:40.405 07:28:01 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:42:40.405 07:28:01 keyring_file -- keyring/file.sh@105 -- # jq length 00:42:40.405 07:28:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:40.662 07:28:01 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:42:40.662 07:28:01 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AK3FyyAgeV 00:42:40.662 07:28:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AK3FyyAgeV 00:42:40.920 07:28:01 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.SBTPPhJgmw 00:42:40.920 07:28:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.SBTPPhJgmw 00:42:41.484 07:28:02 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:41.485 07:28:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:41.742 nvme0n1 00:42:41.742 07:28:02 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:42:41.742 07:28:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:42:42.002 07:28:02 keyring_file -- keyring/file.sh@113 -- # config='{ 00:42:42.002 "subsystems": [ 00:42:42.002 { 00:42:42.002 "subsystem": "keyring", 00:42:42.002 "config": [ 00:42:42.002 { 00:42:42.002 "method": "keyring_file_add_key", 00:42:42.002 "params": { 00:42:42.002 "name": "key0", 00:42:42.002 "path": "/tmp/tmp.AK3FyyAgeV" 00:42:42.002 } 00:42:42.002 }, 00:42:42.002 { 00:42:42.002 "method": "keyring_file_add_key", 00:42:42.002 "params": { 00:42:42.002 "name": "key1", 00:42:42.002 "path": "/tmp/tmp.SBTPPhJgmw" 00:42:42.002 } 00:42:42.002 } 00:42:42.002 ] 
00:42:42.002 }, 00:42:42.002 { 00:42:42.002 "subsystem": "iobuf", 00:42:42.002 "config": [ 00:42:42.002 { 00:42:42.002 "method": "iobuf_set_options", 00:42:42.002 "params": { 00:42:42.002 "small_pool_count": 8192, 00:42:42.002 "large_pool_count": 1024, 00:42:42.002 "small_bufsize": 8192, 00:42:42.002 "large_bufsize": 135168, 00:42:42.002 "enable_numa": false 00:42:42.002 } 00:42:42.002 } 00:42:42.002 ] 00:42:42.002 }, 00:42:42.002 { 00:42:42.002 "subsystem": "sock", 00:42:42.002 "config": [ 00:42:42.002 { 00:42:42.002 "method": "sock_set_default_impl", 00:42:42.002 "params": { 00:42:42.002 "impl_name": "posix" 00:42:42.002 } 00:42:42.002 }, 00:42:42.002 { 00:42:42.002 "method": "sock_impl_set_options", 00:42:42.002 "params": { 00:42:42.002 "impl_name": "ssl", 00:42:42.002 "recv_buf_size": 4096, 00:42:42.002 "send_buf_size": 4096, 00:42:42.002 "enable_recv_pipe": true, 00:42:42.002 "enable_quickack": false, 00:42:42.002 "enable_placement_id": 0, 00:42:42.002 "enable_zerocopy_send_server": true, 00:42:42.002 "enable_zerocopy_send_client": false, 00:42:42.002 "zerocopy_threshold": 0, 00:42:42.002 "tls_version": 0, 00:42:42.002 "enable_ktls": false 00:42:42.002 } 00:42:42.002 }, 00:42:42.002 { 00:42:42.002 "method": "sock_impl_set_options", 00:42:42.002 "params": { 00:42:42.002 "impl_name": "posix", 00:42:42.002 "recv_buf_size": 2097152, 00:42:42.002 "send_buf_size": 2097152, 00:42:42.002 "enable_recv_pipe": true, 00:42:42.002 "enable_quickack": false, 00:42:42.002 "enable_placement_id": 0, 00:42:42.002 "enable_zerocopy_send_server": true, 00:42:42.002 "enable_zerocopy_send_client": false, 00:42:42.002 "zerocopy_threshold": 0, 00:42:42.002 "tls_version": 0, 00:42:42.002 "enable_ktls": false 00:42:42.002 } 00:42:42.002 } 00:42:42.002 ] 00:42:42.002 }, 00:42:42.002 { 00:42:42.002 "subsystem": "vmd", 00:42:42.002 "config": [] 00:42:42.002 }, 00:42:42.002 { 00:42:42.002 "subsystem": "accel", 00:42:42.002 "config": [ 00:42:42.002 { 00:42:42.002 "method": "accel_set_options", 00:42:42.002 "params": { 00:42:42.002 "small_cache_size": 128, 00:42:42.002 "large_cache_size": 16, 00:42:42.002 "task_count": 2048, 00:42:42.002 "sequence_count": 2048, 00:42:42.002 "buf_count": 2048 00:42:42.002 } 00:42:42.002 } 00:42:42.002 ] 00:42:42.002 }, 00:42:42.002 { 00:42:42.002 "subsystem": "bdev", 00:42:42.002 "config": [ 00:42:42.002 { 00:42:42.002 "method": "bdev_set_options", 00:42:42.002 "params": { 00:42:42.002 "bdev_io_pool_size": 65535, 00:42:42.002 "bdev_io_cache_size": 256, 00:42:42.002 "bdev_auto_examine": true, 00:42:42.002 "iobuf_small_cache_size": 128, 00:42:42.002 "iobuf_large_cache_size": 16 00:42:42.002 } 00:42:42.002 }, 00:42:42.002 { 00:42:42.002 "method": "bdev_raid_set_options", 00:42:42.002 "params": { 00:42:42.002 "process_window_size_kb": 1024, 00:42:42.002 "process_max_bandwidth_mb_sec": 0 00:42:42.002 } 00:42:42.002 }, 00:42:42.002 { 00:42:42.002 "method": "bdev_iscsi_set_options", 00:42:42.002 "params": { 00:42:42.002 "timeout_sec": 30 00:42:42.002 } 00:42:42.002 }, 00:42:42.002 { 00:42:42.002 "method": "bdev_nvme_set_options", 00:42:42.002 "params": { 00:42:42.002 "action_on_timeout": "none", 00:42:42.002 "timeout_us": 0, 00:42:42.002 "timeout_admin_us": 0, 00:42:42.002 "keep_alive_timeout_ms": 10000, 00:42:42.002 "arbitration_burst": 0, 00:42:42.002 "low_priority_weight": 0, 00:42:42.002 "medium_priority_weight": 0, 00:42:42.002 "high_priority_weight": 0, 00:42:42.002 "nvme_adminq_poll_period_us": 10000, 00:42:42.002 "nvme_ioq_poll_period_us": 0, 00:42:42.002 "io_queue_requests": 512, 
00:42:42.002 "delay_cmd_submit": true, 00:42:42.002 "transport_retry_count": 4, 00:42:42.002 "bdev_retry_count": 3, 00:42:42.002 "transport_ack_timeout": 0, 00:42:42.002 "ctrlr_loss_timeout_sec": 0, 00:42:42.002 "reconnect_delay_sec": 0, 00:42:42.002 "fast_io_fail_timeout_sec": 0, 00:42:42.002 "disable_auto_failback": false, 00:42:42.002 "generate_uuids": false, 00:42:42.002 "transport_tos": 0, 00:42:42.002 "nvme_error_stat": false, 00:42:42.002 "rdma_srq_size": 0, 00:42:42.002 "io_path_stat": false, 00:42:42.002 "allow_accel_sequence": false, 00:42:42.002 "rdma_max_cq_size": 0, 00:42:42.002 "rdma_cm_event_timeout_ms": 0, 00:42:42.002 "dhchap_digests": [ 00:42:42.002 "sha256", 00:42:42.002 "sha384", 00:42:42.002 "sha512" 00:42:42.002 ], 00:42:42.002 "dhchap_dhgroups": [ 00:42:42.002 "null", 00:42:42.002 "ffdhe2048", 00:42:42.002 "ffdhe3072", 00:42:42.002 "ffdhe4096", 00:42:42.002 "ffdhe6144", 00:42:42.002 "ffdhe8192" 00:42:42.002 ] 00:42:42.002 } 00:42:42.002 }, 00:42:42.002 { 00:42:42.002 "method": "bdev_nvme_attach_controller", 00:42:42.002 "params": { 00:42:42.002 "name": "nvme0", 00:42:42.002 "trtype": "TCP", 00:42:42.002 "adrfam": "IPv4", 00:42:42.002 "traddr": "127.0.0.1", 00:42:42.002 "trsvcid": "4420", 00:42:42.002 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:42.002 "prchk_reftag": false, 00:42:42.002 "prchk_guard": false, 00:42:42.002 "ctrlr_loss_timeout_sec": 0, 00:42:42.002 "reconnect_delay_sec": 0, 00:42:42.002 "fast_io_fail_timeout_sec": 0, 00:42:42.002 "psk": "key0", 00:42:42.002 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:42.002 "hdgst": false, 00:42:42.002 "ddgst": false, 00:42:42.002 "multipath": "multipath" 00:42:42.002 } 00:42:42.003 }, 00:42:42.003 { 00:42:42.003 "method": "bdev_nvme_set_hotplug", 00:42:42.003 "params": { 00:42:42.003 "period_us": 100000, 00:42:42.003 "enable": false 00:42:42.003 } 00:42:42.003 }, 00:42:42.003 { 00:42:42.003 "method": "bdev_wait_for_examine" 00:42:42.003 } 00:42:42.003 ] 00:42:42.003 }, 00:42:42.003 { 00:42:42.003 "subsystem": "nbd", 00:42:42.003 "config": [] 00:42:42.003 } 00:42:42.003 ] 00:42:42.003 }' 00:42:42.003 07:28:02 keyring_file -- keyring/file.sh@115 -- # killprocess 482349 00:42:42.003 07:28:02 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 482349 ']' 00:42:42.003 07:28:02 keyring_file -- common/autotest_common.sh@958 -- # kill -0 482349 00:42:42.003 07:28:02 keyring_file -- common/autotest_common.sh@959 -- # uname 00:42:42.003 07:28:02 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:42.003 07:28:02 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 482349 00:42:42.003 07:28:02 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:42.003 07:28:02 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:42.003 07:28:02 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 482349' 00:42:42.003 killing process with pid 482349 00:42:42.003 07:28:02 keyring_file -- common/autotest_common.sh@973 -- # kill 482349 00:42:42.003 Received shutdown signal, test time was about 1.000000 seconds 00:42:42.003 00:42:42.003 Latency(us) 00:42:42.003 [2024-11-18T06:28:02.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:42.003 [2024-11-18T06:28:02.981Z] =================================================================================================================== 00:42:42.003 [2024-11-18T06:28:02.981Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:42.003 
07:28:02 keyring_file -- common/autotest_common.sh@978 -- # wait 482349 00:42:42.262 07:28:03 keyring_file -- keyring/file.sh@118 -- # bperfpid=483818 00:42:42.262 07:28:03 keyring_file -- keyring/file.sh@120 -- # waitforlisten 483818 /var/tmp/bperf.sock 00:42:42.262 07:28:03 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 483818 ']' 00:42:42.262 07:28:03 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:42.262 07:28:03 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:42:42.262 07:28:03 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:42.262 07:28:03 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:42.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:42.262 07:28:03 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:42.262 07:28:03 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:42:42.262 "subsystems": [ 00:42:42.262 { 00:42:42.262 "subsystem": "keyring", 00:42:42.262 "config": [ 00:42:42.262 { 00:42:42.262 "method": "keyring_file_add_key", 00:42:42.262 "params": { 00:42:42.262 "name": "key0", 00:42:42.262 "path": "/tmp/tmp.AK3FyyAgeV" 00:42:42.262 } 00:42:42.262 }, 00:42:42.262 { 00:42:42.262 "method": "keyring_file_add_key", 00:42:42.262 "params": { 00:42:42.262 "name": "key1", 00:42:42.262 "path": "/tmp/tmp.SBTPPhJgmw" 00:42:42.262 } 00:42:42.262 } 00:42:42.262 ] 00:42:42.262 }, 00:42:42.262 { 00:42:42.262 "subsystem": "iobuf", 00:42:42.262 "config": [ 00:42:42.262 { 00:42:42.262 "method": "iobuf_set_options", 00:42:42.262 "params": { 00:42:42.262 "small_pool_count": 8192, 00:42:42.262 "large_pool_count": 1024, 00:42:42.262 "small_bufsize": 8192, 00:42:42.262 "large_bufsize": 135168, 00:42:42.262 "enable_numa": false 00:42:42.262 } 00:42:42.262 } 00:42:42.262 ] 00:42:42.262 }, 00:42:42.262 { 00:42:42.262 "subsystem": "sock", 00:42:42.262 "config": [ 00:42:42.262 { 00:42:42.262 "method": "sock_set_default_impl", 00:42:42.262 "params": { 00:42:42.262 "impl_name": "posix" 00:42:42.262 } 00:42:42.262 }, 00:42:42.262 { 00:42:42.262 "method": "sock_impl_set_options", 00:42:42.262 "params": { 00:42:42.262 "impl_name": "ssl", 00:42:42.262 "recv_buf_size": 4096, 00:42:42.262 "send_buf_size": 4096, 00:42:42.262 "enable_recv_pipe": true, 00:42:42.262 "enable_quickack": false, 00:42:42.262 "enable_placement_id": 0, 00:42:42.262 "enable_zerocopy_send_server": true, 00:42:42.262 "enable_zerocopy_send_client": false, 00:42:42.262 "zerocopy_threshold": 0, 00:42:42.262 "tls_version": 0, 00:42:42.262 "enable_ktls": false 00:42:42.262 } 00:42:42.262 }, 00:42:42.262 { 00:42:42.262 "method": "sock_impl_set_options", 00:42:42.262 "params": { 00:42:42.262 "impl_name": "posix", 00:42:42.262 "recv_buf_size": 2097152, 00:42:42.262 "send_buf_size": 2097152, 00:42:42.262 "enable_recv_pipe": true, 00:42:42.262 "enable_quickack": false, 00:42:42.262 "enable_placement_id": 0, 00:42:42.262 "enable_zerocopy_send_server": true, 00:42:42.262 "enable_zerocopy_send_client": false, 00:42:42.262 "zerocopy_threshold": 0, 00:42:42.262 "tls_version": 0, 00:42:42.262 "enable_ktls": false 00:42:42.262 } 00:42:42.262 } 00:42:42.262 ] 00:42:42.262 }, 00:42:42.262 { 00:42:42.262 "subsystem": "vmd", 00:42:42.262 "config": [] 
00:42:42.262 }, 00:42:42.262 { 00:42:42.262 "subsystem": "accel", 00:42:42.262 "config": [ 00:42:42.262 { 00:42:42.262 "method": "accel_set_options", 00:42:42.262 "params": { 00:42:42.262 "small_cache_size": 128, 00:42:42.262 "large_cache_size": 16, 00:42:42.262 "task_count": 2048, 00:42:42.262 "sequence_count": 2048, 00:42:42.262 "buf_count": 2048 00:42:42.262 } 00:42:42.262 } 00:42:42.262 ] 00:42:42.262 }, 00:42:42.262 { 00:42:42.262 "subsystem": "bdev", 00:42:42.262 "config": [ 00:42:42.262 { 00:42:42.262 "method": "bdev_set_options", 00:42:42.262 "params": { 00:42:42.262 "bdev_io_pool_size": 65535, 00:42:42.262 "bdev_io_cache_size": 256, 00:42:42.262 "bdev_auto_examine": true, 00:42:42.262 "iobuf_small_cache_size": 128, 00:42:42.262 "iobuf_large_cache_size": 16 00:42:42.262 } 00:42:42.262 }, 00:42:42.262 { 00:42:42.262 "method": "bdev_raid_set_options", 00:42:42.262 "params": { 00:42:42.262 "process_window_size_kb": 1024, 00:42:42.262 "process_max_bandwidth_mb_sec": 0 00:42:42.262 } 00:42:42.262 }, 00:42:42.262 { 00:42:42.262 "method": "bdev_iscsi_set_options", 00:42:42.262 "params": { 00:42:42.262 "timeout_sec": 30 00:42:42.262 } 00:42:42.262 }, 00:42:42.262 { 00:42:42.262 "method": "bdev_nvme_set_options", 00:42:42.262 "params": { 00:42:42.262 "action_on_timeout": "none", 00:42:42.262 "timeout_us": 0, 00:42:42.262 "timeout_admin_us": 0, 00:42:42.263 "keep_alive_timeout_ms": 10000, 00:42:42.263 "arbitration_burst": 0, 00:42:42.263 "low_priority_weight": 0, 00:42:42.263 "medium_priority_weight": 0, 00:42:42.263 "high_priority_weight": 0, 00:42:42.263 "nvme_adminq_poll_period_us": 10000, 00:42:42.263 "nvme_ioq_poll_period_us": 0, 00:42:42.263 "io_queue_requests": 512, 00:42:42.263 "delay_cmd_submit": true, 00:42:42.263 "transport_retry_count": 4, 00:42:42.263 "bdev_retry_count": 3, 00:42:42.263 "transport_ack_timeout": 0, 00:42:42.263 "ctrlr_loss_timeout_sec": 0, 00:42:42.263 "reconnect_delay_sec": 0, 00:42:42.263 "fast_io_fail_timeout_sec": 0, 00:42:42.263 "disable_auto_failback": false, 00:42:42.263 "generate_uuids": false, 00:42:42.263 "transport_tos": 0, 00:42:42.263 "nvme_error_stat": false, 00:42:42.263 "rdma_srq_size": 0, 00:42:42.263 "io_path_stat": false, 00:42:42.263 "allow_accel_sequence": false, 00:42:42.263 "rdma_max_cq_size": 0, 00:42:42.263 "rdma_cm_event_timeout_ms": 0, 00:42:42.263 "dhchap_digests": [ 00:42:42.263 "sha256", 00:42:42.263 "sha384", 00:42:42.263 "sha512" 00:42:42.263 ], 00:42:42.263 "dhchap_dhgroups": [ 00:42:42.263 "null", 00:42:42.263 "ffdhe2048", 00:42:42.263 "ffdhe3072", 00:42:42.263 "ffdhe4096", 00:42:42.263 "ffdhe6144", 00:42:42.263 "ffdhe8192" 00:42:42.263 ] 00:42:42.263 } 00:42:42.263 }, 00:42:42.263 { 00:42:42.263 "method": "bdev_nvme_attach_controller", 00:42:42.263 "params": { 00:42:42.263 "name": "nvme0", 00:42:42.263 "trtype": "TCP", 00:42:42.263 "adrfam": "IPv4", 00:42:42.263 "traddr": "127.0.0.1", 00:42:42.263 "trsvcid": "4420", 00:42:42.263 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:42.263 "prchk_reftag": false, 00:42:42.263 "prchk_guard": false, 00:42:42.263 "ctrlr_loss_timeout_sec": 0, 00:42:42.263 "reconnect_delay_sec": 0, 00:42:42.263 "fast_io_fail_timeout_sec": 0, 00:42:42.263 "psk": "key0", 00:42:42.263 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:42.263 "hdgst": false, 00:42:42.263 "ddgst": false, 00:42:42.263 "multipath": "multipath" 00:42:42.263 } 00:42:42.263 }, 00:42:42.263 { 00:42:42.263 "method": "bdev_nvme_set_hotplug", 00:42:42.263 "params": { 00:42:42.263 "period_us": 100000, 00:42:42.263 "enable": false 00:42:42.263 } 
00:42:42.263 }, 00:42:42.263 { 00:42:42.263 "method": "bdev_wait_for_examine" 00:42:42.263 } 00:42:42.263 ] 00:42:42.263 }, 00:42:42.263 { 00:42:42.263 "subsystem": "nbd", 00:42:42.263 "config": [] 00:42:42.263 } 00:42:42.263 ] 00:42:42.263 }' 00:42:42.263 07:28:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:42.263 [2024-11-18 07:28:03.075655] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:42:42.263 [2024-11-18 07:28:03.075750] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483818 ] 00:42:42.263 [2024-11-18 07:28:03.144679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:42.263 [2024-11-18 07:28:03.195054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:42.521 [2024-11-18 07:28:03.378020] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:42.521 07:28:03 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:42.521 07:28:03 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:42.521 07:28:03 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:42:42.521 07:28:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:42.521 07:28:03 keyring_file -- keyring/file.sh@121 -- # jq length 00:42:42.779 07:28:03 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:42:42.779 07:28:03 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:42:43.037 07:28:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:43.037 07:28:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:43.037 07:28:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:43.037 07:28:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:43.037 07:28:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:43.295 07:28:04 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:42:43.295 07:28:04 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:42:43.295 07:28:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:43.295 07:28:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:43.295 07:28:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:43.295 07:28:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:43.295 07:28:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:43.553 07:28:04 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:42:43.553 07:28:04 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:42:43.553 07:28:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:42:43.553 07:28:04 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:42:43.811 07:28:04 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:42:43.811 07:28:04 keyring_file -- keyring/file.sh@1 -- # cleanup 00:42:43.811 07:28:04 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.AK3FyyAgeV /tmp/tmp.SBTPPhJgmw 00:42:43.811 07:28:04 keyring_file -- keyring/file.sh@20 -- # killprocess 483818 00:42:43.811 07:28:04 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 483818 ']' 00:42:43.811 07:28:04 keyring_file -- common/autotest_common.sh@958 -- # kill -0 483818 00:42:43.811 07:28:04 keyring_file -- common/autotest_common.sh@959 -- # uname 00:42:43.811 07:28:04 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:43.811 07:28:04 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 483818 00:42:43.811 07:28:04 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:43.811 07:28:04 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:43.811 07:28:04 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 483818' 00:42:43.811 killing process with pid 483818 00:42:43.811 07:28:04 keyring_file -- common/autotest_common.sh@973 -- # kill 483818 00:42:43.811 Received shutdown signal, test time was about 1.000000 seconds 00:42:43.811 00:42:43.811 Latency(us) 00:42:43.811 [2024-11-18T06:28:04.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:43.811 [2024-11-18T06:28:04.789Z] =================================================================================================================== 00:42:43.811 [2024-11-18T06:28:04.789Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:43.811 07:28:04 keyring_file -- common/autotest_common.sh@978 -- # wait 483818 00:42:44.069 07:28:04 keyring_file -- keyring/file.sh@21 -- # killprocess 482336 00:42:44.069 07:28:04 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 482336 ']' 00:42:44.069 07:28:04 keyring_file -- common/autotest_common.sh@958 -- # kill -0 482336 00:42:44.069 07:28:04 keyring_file -- common/autotest_common.sh@959 -- # uname 00:42:44.069 07:28:04 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:44.069 07:28:04 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 482336 00:42:44.069 07:28:04 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:44.069 07:28:04 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:44.069 07:28:04 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 482336' 00:42:44.069 killing process with pid 482336 00:42:44.069 07:28:04 keyring_file -- common/autotest_common.sh@973 -- # kill 482336 00:42:44.069 07:28:04 keyring_file -- common/autotest_common.sh@978 -- # wait 482336 00:42:44.327 00:42:44.327 real 0m14.376s 00:42:44.327 user 0m36.751s 00:42:44.327 sys 0m3.250s 00:42:44.327 07:28:05 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:44.327 07:28:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:44.327 ************************************ 00:42:44.327 END TEST keyring_file 00:42:44.327 ************************************ 00:42:44.327 07:28:05 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:42:44.327 07:28:05 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:44.327 07:28:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:44.327 07:28:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:44.327 07:28:05 -- 
common/autotest_common.sh@10 -- # set +x 00:42:44.327 ************************************ 00:42:44.327 START TEST keyring_linux 00:42:44.327 ************************************ 00:42:44.327 07:28:05 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:44.327 Joined session keyring: 355813169 00:42:44.327 * Looking for test storage... 00:42:44.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:44.327 07:28:05 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:44.327 07:28:05 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:42:44.327 07:28:05 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:44.586 07:28:05 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@345 -- # : 1 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:44.586 07:28:05 keyring_linux -- scripts/common.sh@368 -- # return 0 00:42:44.586 07:28:05 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:44.586 07:28:05 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:44.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:44.586 --rc genhtml_branch_coverage=1 00:42:44.587 --rc genhtml_function_coverage=1 00:42:44.587 --rc genhtml_legend=1 00:42:44.587 --rc geninfo_all_blocks=1 00:42:44.587 --rc geninfo_unexecuted_blocks=1 00:42:44.587 00:42:44.587 ' 00:42:44.587 07:28:05 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:44.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:44.587 --rc genhtml_branch_coverage=1 00:42:44.587 --rc genhtml_function_coverage=1 00:42:44.587 --rc genhtml_legend=1 00:42:44.587 --rc geninfo_all_blocks=1 00:42:44.587 --rc geninfo_unexecuted_blocks=1 00:42:44.587 00:42:44.587 ' 00:42:44.587 07:28:05 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:44.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:44.587 --rc genhtml_branch_coverage=1 00:42:44.587 --rc genhtml_function_coverage=1 00:42:44.587 --rc genhtml_legend=1 00:42:44.587 --rc geninfo_all_blocks=1 00:42:44.587 --rc geninfo_unexecuted_blocks=1 00:42:44.587 00:42:44.587 ' 00:42:44.587 07:28:05 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:44.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:44.587 --rc genhtml_branch_coverage=1 00:42:44.587 --rc genhtml_function_coverage=1 00:42:44.587 --rc genhtml_legend=1 00:42:44.587 --rc geninfo_all_blocks=1 00:42:44.587 --rc geninfo_unexecuted_blocks=1 00:42:44.587 00:42:44.587 ' 00:42:44.587 07:28:05 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:44.587 07:28:05 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:44.587 07:28:05 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:42:44.587 07:28:05 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:44.587 07:28:05 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:44.587 07:28:05 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:44.587 07:28:05 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:44.587 07:28:05 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:44.587 07:28:05 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:44.587 07:28:05 keyring_linux -- paths/export.sh@5 -- # export PATH 00:42:44.587 07:28:05 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
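The NVMeTLSkey-1:<hash>:<base64>: strings used as PSKs in both tests come out of prep_key / format_interchange_psk, which shells the raw key material through an inline python - step (format_key in nvmf/common.sh). A rough reconstruction of that step, assuming the trailing four bytes of the base64 payload are a little-endian CRC32 of the key material (the inline script body is not shown verbatim in the trace):

  format_interchange_psk() {
      local key=$1 digest=$2
      python3 - "$key" "$digest" <<-'EOF'
  	import base64, sys, zlib
  	key = sys.argv[1].encode()                   # raw key material as passed on the command line
  	digest = int(sys.argv[2])                    # 0 = no PSK digest
  	crc = zlib.crc32(key).to_bytes(4, "little")  # assumed CRC32 trailer, little-endian
  	print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
  	EOF
  }

  # format_interchange_psk 00112233445566778899aabbccddeeff 0
  # should reproduce the NVMeTLSkey-1:00:MDAx...JEiQ: value loaded as key0 above,
  # provided the CRC assumption holds
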
00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:44.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:44.587 07:28:05 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:44.587 07:28:05 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:44.587 07:28:05 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:44.587 07:28:05 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:42:44.587 07:28:05 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:42:44.587 07:28:05 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:42:44.587 07:28:05 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:42:44.587 07:28:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:44.587 07:28:05 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:42:44.587 07:28:05 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:44.587 07:28:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:44.587 07:28:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:42:44.587 07:28:05 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:44.587 07:28:05 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:42:44.587 07:28:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:42:44.587 /tmp/:spdk-test:key0 00:42:44.587 07:28:05 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:42:44.587 07:28:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:44.587 07:28:05 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:42:44.587 07:28:05 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:44.587 07:28:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:44.587 07:28:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:42:44.587 
07:28:05 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:44.587 07:28:05 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:44.587 07:28:05 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:42:44.587 07:28:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:42:44.587 /tmp/:spdk-test:key1 00:42:44.587 07:28:05 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=484184 00:42:44.587 07:28:05 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:44.587 07:28:05 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 484184 00:42:44.587 07:28:05 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 484184 ']' 00:42:44.587 07:28:05 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:44.587 07:28:05 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:44.587 07:28:05 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:44.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:44.587 07:28:05 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:44.587 07:28:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:44.587 [2024-11-18 07:28:05.530016] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:42:44.587 [2024-11-18 07:28:05.530131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484184 ] 00:42:44.845 [2024-11-18 07:28:05.599287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:44.845 [2024-11-18 07:28:05.644956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:45.103 07:28:05 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:45.103 07:28:05 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:42:45.103 07:28:05 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:42:45.103 07:28:05 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.103 07:28:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:45.103 [2024-11-18 07:28:05.898061] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:45.103 null0 00:42:45.104 [2024-11-18 07:28:05.930119] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:45.104 [2024-11-18 07:28:05.930628] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:45.104 07:28:05 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.104 07:28:05 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:42:45.104 58822136 00:42:45.104 07:28:05 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:42:45.104 339326826 00:42:45.104 07:28:05 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=484308 00:42:45.104 07:28:05 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:42:45.104 07:28:05 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 484308 /var/tmp/bperf.sock 00:42:45.104 07:28:05 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 484308 ']' 00:42:45.104 07:28:05 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:45.104 07:28:05 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:45.104 07:28:05 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:45.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:45.104 07:28:05 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:45.104 07:28:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:45.104 [2024-11-18 07:28:05.995388] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
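keyring_linux keeps the PSKs in the kernel session keyring rather than in files: the test registers them with keyctl, and bperf, started with keyring_linux_set_options --enable (below), resolves them by name. The keyctl side of that, using the key name and payload from this run:

  # add the interchange PSK under the session keyring (@s); keyctl prints the key serial
  keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s

  # find it again by name and dump the payload
  sn=$(keyctl search @s user :spdk-test:key0)
  keyctl print "$sn"

  # a controller then references it by name, e.g. ... --psk :spdk-test:key0

  # cleanup: unlink the key from the session keyring
  keyctl unlink "$sn"
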
00:42:45.104 [2024-11-18 07:28:05.995460] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484308 ] 00:42:45.104 [2024-11-18 07:28:06.059172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:45.362 [2024-11-18 07:28:06.104991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:45.362 07:28:06 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:45.362 07:28:06 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:42:45.362 07:28:06 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:42:45.362 07:28:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:42:45.619 07:28:06 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:42:45.619 07:28:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:45.877 07:28:06 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:45.877 07:28:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:46.134 [2024-11-18 07:28:07.089876] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:46.392 nvme0n1 00:42:46.392 07:28:07 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:42:46.392 07:28:07 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:42:46.392 07:28:07 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:46.392 07:28:07 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:46.392 07:28:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:46.392 07:28:07 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:46.650 07:28:07 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:42:46.650 07:28:07 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:46.650 07:28:07 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:42:46.650 07:28:07 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:42:46.650 07:28:07 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:46.650 07:28:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:46.650 07:28:07 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:42:46.908 07:28:07 keyring_linux -- keyring/linux.sh@25 -- # sn=58822136 00:42:46.908 07:28:07 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:42:46.908 07:28:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:46.908 07:28:07 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 58822136 == \5\8\8\2\2\1\3\6 ]] 00:42:46.908 07:28:07 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 58822136 00:42:46.908 07:28:07 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:42:46.908 07:28:07 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:46.908 Running I/O for 1 seconds... 00:42:48.281 11409.00 IOPS, 44.57 MiB/s 00:42:48.281 Latency(us) 00:42:48.281 [2024-11-18T06:28:09.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:48.281 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:48.281 nvme0n1 : 1.01 11413.25 44.58 0.00 0.00 11147.38 5606.97 16796.63 00:42:48.281 [2024-11-18T06:28:09.259Z] =================================================================================================================== 00:42:48.281 [2024-11-18T06:28:09.259Z] Total : 11413.25 44.58 0.00 0.00 11147.38 5606.97 16796.63 00:42:48.281 { 00:42:48.281 "results": [ 00:42:48.281 { 00:42:48.281 "job": "nvme0n1", 00:42:48.281 "core_mask": "0x2", 00:42:48.281 "workload": "randread", 00:42:48.281 "status": "finished", 00:42:48.281 "queue_depth": 128, 00:42:48.281 "io_size": 4096, 00:42:48.281 "runtime": 1.01093, 00:42:48.281 "iops": 11413.25314314542, 00:42:48.281 "mibps": 44.5830200904118, 00:42:48.281 "io_failed": 0, 00:42:48.281 "io_timeout": 0, 00:42:48.281 "avg_latency_us": 11147.38274121582, 00:42:48.281 "min_latency_us": 5606.968888888889, 00:42:48.281 "max_latency_us": 16796.634074074074 00:42:48.281 } 00:42:48.281 ], 00:42:48.281 "core_count": 1 00:42:48.281 } 00:42:48.281 07:28:08 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:48.281 07:28:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:48.281 07:28:09 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:42:48.281 07:28:09 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:42:48.281 07:28:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:48.281 07:28:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:48.281 07:28:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:48.281 07:28:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:48.538 07:28:09 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:42:48.538 07:28:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:48.538 07:28:09 keyring_linux -- keyring/linux.sh@23 -- # return 00:42:48.538 07:28:09 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:48.538 07:28:09 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:42:48.538 07:28:09 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:42:48.538 07:28:09 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:48.538 07:28:09 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:48.538 07:28:09 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:48.538 07:28:09 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:48.538 07:28:09 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:48.538 07:28:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:48.794 [2024-11-18 07:28:09.674259] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:48.794 [2024-11-18 07:28:09.675150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0f8f0 (107): Transport endpoint is not connected 00:42:48.794 [2024-11-18 07:28:09.676142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0f8f0 (9): Bad file descriptor 00:42:48.794 [2024-11-18 07:28:09.677141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:48.794 [2024-11-18 07:28:09.677161] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:48.794 [2024-11-18 07:28:09.677174] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:48.794 [2024-11-18 07:28:09.677195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:42:48.794 request: 00:42:48.794 { 00:42:48.794 "name": "nvme0", 00:42:48.794 "trtype": "tcp", 00:42:48.794 "traddr": "127.0.0.1", 00:42:48.794 "adrfam": "ipv4", 00:42:48.794 "trsvcid": "4420", 00:42:48.795 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:48.795 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:48.795 "prchk_reftag": false, 00:42:48.795 "prchk_guard": false, 00:42:48.795 "hdgst": false, 00:42:48.795 "ddgst": false, 00:42:48.795 "psk": ":spdk-test:key1", 00:42:48.795 "allow_unrecognized_csi": false, 00:42:48.795 "method": "bdev_nvme_attach_controller", 00:42:48.795 "req_id": 1 00:42:48.795 } 00:42:48.795 Got JSON-RPC error response 00:42:48.795 response: 00:42:48.795 { 00:42:48.795 "code": -5, 00:42:48.795 "message": "Input/output error" 00:42:48.795 } 00:42:48.795 07:28:09 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:42:48.795 07:28:09 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:48.795 07:28:09 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:48.795 07:28:09 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:48.795 07:28:09 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:42:48.795 07:28:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:48.795 07:28:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:42:48.795 07:28:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:42:48.795 07:28:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:42:48.795 07:28:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:48.795 07:28:09 keyring_linux -- keyring/linux.sh@33 -- # sn=58822136 00:42:48.795 07:28:09 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 58822136 00:42:48.795 1 links removed 00:42:48.795 07:28:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:48.795 07:28:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:42:48.795 07:28:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:42:48.795 07:28:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:42:48.795 07:28:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:42:48.795 07:28:09 keyring_linux -- keyring/linux.sh@33 -- # sn=339326826 00:42:48.795 07:28:09 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 339326826 00:42:48.795 1 links removed 00:42:48.795 07:28:09 keyring_linux -- keyring/linux.sh@41 -- # killprocess 484308 00:42:48.795 07:28:09 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 484308 ']' 00:42:48.795 07:28:09 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 484308 00:42:48.795 07:28:09 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:42:48.795 07:28:09 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:48.795 07:28:09 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 484308 00:42:48.795 07:28:09 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:48.795 07:28:09 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:48.795 07:28:09 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 484308' 00:42:48.795 killing process with pid 484308 00:42:48.795 07:28:09 keyring_linux -- common/autotest_common.sh@973 -- # kill 484308 00:42:48.795 Received shutdown signal, test time was about 1.000000 seconds 00:42:48.795 00:42:48.795 Latency(us) 
00:42:48.795 [2024-11-18T06:28:09.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:48.795 [2024-11-18T06:28:09.773Z] =================================================================================================================== 00:42:48.795 [2024-11-18T06:28:09.773Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:48.795 07:28:09 keyring_linux -- common/autotest_common.sh@978 -- # wait 484308 00:42:49.053 07:28:09 keyring_linux -- keyring/linux.sh@42 -- # killprocess 484184 00:42:49.053 07:28:09 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 484184 ']' 00:42:49.053 07:28:09 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 484184 00:42:49.053 07:28:09 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:42:49.053 07:28:09 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:49.053 07:28:09 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 484184 00:42:49.053 07:28:09 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:49.053 07:28:09 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:49.053 07:28:09 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 484184' 00:42:49.053 killing process with pid 484184 00:42:49.053 07:28:09 keyring_linux -- common/autotest_common.sh@973 -- # kill 484184 00:42:49.053 07:28:09 keyring_linux -- common/autotest_common.sh@978 -- # wait 484184 00:42:49.621 00:42:49.621 real 0m5.125s 00:42:49.621 user 0m10.243s 00:42:49.621 sys 0m1.597s 00:42:49.621 07:28:10 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:49.621 07:28:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:49.621 ************************************ 00:42:49.621 END TEST keyring_linux 00:42:49.621 ************************************ 00:42:49.621 07:28:10 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:42:49.621 07:28:10 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:42:49.621 07:28:10 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:42:49.621 07:28:10 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:42:49.621 07:28:10 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:42:49.621 07:28:10 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:42:49.621 07:28:10 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:42:49.621 07:28:10 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:42:49.621 07:28:10 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:42:49.621 07:28:10 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:42:49.621 07:28:10 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:42:49.621 07:28:10 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:42:49.621 07:28:10 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:42:49.621 07:28:10 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:42:49.621 07:28:10 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:42:49.621 07:28:10 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:42:49.621 07:28:10 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:42:49.621 07:28:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:49.621 07:28:10 -- common/autotest_common.sh@10 -- # set +x 00:42:49.621 07:28:10 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:42:49.621 07:28:10 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:42:49.621 07:28:10 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:42:49.621 07:28:10 -- common/autotest_common.sh@10 -- # set +x 00:42:51.523 INFO: APP EXITING 00:42:51.523 INFO: killing all 
VMs 00:42:51.523 INFO: killing vhost app 00:42:51.523 INFO: EXIT DONE 00:42:52.899 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:42:52.899 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:42:52.899 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:42:52.899 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:42:52.899 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:42:52.899 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:42:52.899 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:42:52.899 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:42:52.899 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:42:52.899 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:42:52.899 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:42:52.899 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:42:52.899 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:42:52.899 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:42:52.899 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:42:52.899 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:42:52.899 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:42:54.277 Cleaning 00:42:54.277 Removing: /var/run/dpdk/spdk0/config 00:42:54.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:42:54.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:42:54.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:42:54.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:42:54.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:42:54.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:42:54.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:42:54.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:42:54.277 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:42:54.277 Removing: /var/run/dpdk/spdk0/hugepage_info 00:42:54.277 Removing: /var/run/dpdk/spdk1/config 00:42:54.277 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:42:54.277 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:42:54.277 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:42:54.277 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:42:54.277 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:42:54.277 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:42:54.277 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:42:54.277 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:42:54.277 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:42:54.277 Removing: /var/run/dpdk/spdk1/hugepage_info 00:42:54.277 Removing: /var/run/dpdk/spdk2/config 00:42:54.277 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:42:54.277 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:42:54.277 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:42:54.277 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:42:54.277 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:42:54.277 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:42:54.277 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:42:54.277 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:42:54.277 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:42:54.277 Removing: /var/run/dpdk/spdk2/hugepage_info 00:42:54.277 Removing: /var/run/dpdk/spdk3/config 00:42:54.277 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:42:54.277 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:42:54.277 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:42:54.277 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:42:54.277 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:42:54.277 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:42:54.277 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:42:54.277 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:42:54.277 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:42:54.277 Removing: /var/run/dpdk/spdk3/hugepage_info 00:42:54.277 Removing: /var/run/dpdk/spdk4/config 00:42:54.277 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:42:54.277 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:42:54.277 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:42:54.277 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:42:54.277 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:42:54.277 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:42:54.277 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:42:54.277 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:42:54.277 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:42:54.277 Removing: /var/run/dpdk/spdk4/hugepage_info 00:42:54.277 Removing: /dev/shm/bdev_svc_trace.1 00:42:54.277 Removing: /dev/shm/nvmf_trace.0 00:42:54.277 Removing: /dev/shm/spdk_tgt_trace.pid99237 00:42:54.277 Removing: /var/run/dpdk/spdk0 00:42:54.277 Removing: /var/run/dpdk/spdk1 00:42:54.277 Removing: /var/run/dpdk/spdk2 00:42:54.277 Removing: /var/run/dpdk/spdk3 00:42:54.277 Removing: /var/run/dpdk/spdk4 00:42:54.277 Removing: /var/run/dpdk/spdk_pid100753 00:42:54.277 Removing: /var/run/dpdk/spdk_pid101055 00:42:54.277 Removing: /var/run/dpdk/spdk_pid101784 00:42:54.277 Removing: /var/run/dpdk/spdk_pid101794 00:42:54.277 Removing: /var/run/dpdk/spdk_pid102054 00:42:54.277 Removing: /var/run/dpdk/spdk_pid103372 00:42:54.277 Removing: /var/run/dpdk/spdk_pid104302 00:42:54.277 Removing: /var/run/dpdk/spdk_pid104500 00:42:54.278 Removing: /var/run/dpdk/spdk_pid104814 00:42:54.278 Removing: /var/run/dpdk/spdk_pid105028 00:42:54.278 Removing: /var/run/dpdk/spdk_pid105225 00:42:54.278 Removing: /var/run/dpdk/spdk_pid105380 00:42:54.278 Removing: /var/run/dpdk/spdk_pid105540 00:42:54.278 Removing: /var/run/dpdk/spdk_pid105728 00:42:54.278 Removing: /var/run/dpdk/spdk_pid106042 00:42:54.278 Removing: /var/run/dpdk/spdk_pid108517 00:42:54.278 Removing: /var/run/dpdk/spdk_pid108648 00:42:54.278 Removing: /var/run/dpdk/spdk_pid108849 00:42:54.278 Removing: /var/run/dpdk/spdk_pid108879 00:42:54.278 Removing: /var/run/dpdk/spdk_pid109190 00:42:54.278 Removing: /var/run/dpdk/spdk_pid109312 00:42:54.278 Removing: /var/run/dpdk/spdk_pid109611 00:42:54.278 Removing: /var/run/dpdk/spdk_pid109624 00:42:54.278 Removing: /var/run/dpdk/spdk_pid109787 00:42:54.278 Removing: /var/run/dpdk/spdk_pid109920 00:42:54.278 Removing: /var/run/dpdk/spdk_pid110082 00:42:54.278 Removing: /var/run/dpdk/spdk_pid110092 00:42:54.278 Removing: /var/run/dpdk/spdk_pid110589 00:42:54.278 Removing: /var/run/dpdk/spdk_pid110742 00:42:54.278 Removing: /var/run/dpdk/spdk_pid110953 00:42:54.278 Removing: /var/run/dpdk/spdk_pid113108 00:42:54.278 Removing: /var/run/dpdk/spdk_pid115703 00:42:54.278 Removing: /var/run/dpdk/spdk_pid122691 00:42:54.278 Removing: /var/run/dpdk/spdk_pid123116 00:42:54.278 Removing: /var/run/dpdk/spdk_pid125623 00:42:54.278 Removing: 
/var/run/dpdk/spdk_pid125896 00:42:54.278 Removing: /var/run/dpdk/spdk_pid128425 00:42:54.278 Removing: /var/run/dpdk/spdk_pid132383 00:42:54.278 Removing: /var/run/dpdk/spdk_pid135075 00:42:54.278 Removing: /var/run/dpdk/spdk_pid141383 00:42:54.278 Removing: /var/run/dpdk/spdk_pid146762 00:42:54.278 Removing: /var/run/dpdk/spdk_pid147987 00:42:54.278 Removing: /var/run/dpdk/spdk_pid148661 00:42:54.278 Removing: /var/run/dpdk/spdk_pid159034 00:42:54.278 Removing: /var/run/dpdk/spdk_pid161454 00:42:54.278 Removing: /var/run/dpdk/spdk_pid217201 00:42:54.278 Removing: /var/run/dpdk/spdk_pid220427 00:42:54.278 Removing: /var/run/dpdk/spdk_pid224239 00:42:54.278 Removing: /var/run/dpdk/spdk_pid229122 00:42:54.278 Removing: /var/run/dpdk/spdk_pid229129 00:42:54.278 Removing: /var/run/dpdk/spdk_pid229787 00:42:54.278 Removing: /var/run/dpdk/spdk_pid230361 00:42:54.278 Removing: /var/run/dpdk/spdk_pid230979 00:42:54.278 Removing: /var/run/dpdk/spdk_pid231378 00:42:54.278 Removing: /var/run/dpdk/spdk_pid231391 00:42:54.278 Removing: /var/run/dpdk/spdk_pid231644 00:42:54.278 Removing: /var/run/dpdk/spdk_pid231782 00:42:54.278 Removing: /var/run/dpdk/spdk_pid231784 00:42:54.278 Removing: /var/run/dpdk/spdk_pid232436 00:42:54.278 Removing: /var/run/dpdk/spdk_pid232980 00:42:54.537 Removing: /var/run/dpdk/spdk_pid233634 00:42:54.537 Removing: /var/run/dpdk/spdk_pid234029 00:42:54.537 Removing: /var/run/dpdk/spdk_pid234052 00:42:54.537 Removing: /var/run/dpdk/spdk_pid234293 00:42:54.537 Removing: /var/run/dpdk/spdk_pid235189 00:42:54.537 Removing: /var/run/dpdk/spdk_pid235917 00:42:54.537 Removing: /var/run/dpdk/spdk_pid241241 00:42:54.537 Removing: /var/run/dpdk/spdk_pid269618 00:42:54.537 Removing: /var/run/dpdk/spdk_pid272542 00:42:54.537 Removing: /var/run/dpdk/spdk_pid273715 00:42:54.537 Removing: /var/run/dpdk/spdk_pid274934 00:42:54.537 Removing: /var/run/dpdk/spdk_pid275063 00:42:54.537 Removing: /var/run/dpdk/spdk_pid275204 00:42:54.537 Removing: /var/run/dpdk/spdk_pid275344 00:42:54.537 Removing: /var/run/dpdk/spdk_pid275782 00:42:54.537 Removing: /var/run/dpdk/spdk_pid277208 00:42:54.537 Removing: /var/run/dpdk/spdk_pid278565 00:42:54.537 Removing: /var/run/dpdk/spdk_pid278881 00:42:54.537 Removing: /var/run/dpdk/spdk_pid280488 00:42:54.537 Removing: /var/run/dpdk/spdk_pid280872 00:42:54.537 Removing: /var/run/dpdk/spdk_pid281348 00:42:54.537 Removing: /var/run/dpdk/spdk_pid283737 00:42:54.537 Removing: /var/run/dpdk/spdk_pid287130 00:42:54.537 Removing: /var/run/dpdk/spdk_pid287132 00:42:54.537 Removing: /var/run/dpdk/spdk_pid287134 00:42:54.537 Removing: /var/run/dpdk/spdk_pid289258 00:42:54.537 Removing: /var/run/dpdk/spdk_pid291455 00:42:54.537 Removing: /var/run/dpdk/spdk_pid294978 00:42:54.537 Removing: /var/run/dpdk/spdk_pid318332 00:42:54.537 Removing: /var/run/dpdk/spdk_pid321107 00:42:54.537 Removing: /var/run/dpdk/spdk_pid324891 00:42:54.537 Removing: /var/run/dpdk/spdk_pid325856 00:42:54.537 Removing: /var/run/dpdk/spdk_pid326943 00:42:54.537 Removing: /var/run/dpdk/spdk_pid327906 00:42:54.537 Removing: /var/run/dpdk/spdk_pid330663 00:42:54.537 Removing: /var/run/dpdk/spdk_pid333242 00:42:54.537 Removing: /var/run/dpdk/spdk_pid335486 00:42:54.537 Removing: /var/run/dpdk/spdk_pid339716 00:42:54.537 Removing: /var/run/dpdk/spdk_pid339833 00:42:54.537 Removing: /var/run/dpdk/spdk_pid342731 00:42:54.537 Removing: /var/run/dpdk/spdk_pid342872 00:42:54.537 Removing: /var/run/dpdk/spdk_pid343256 00:42:54.537 Removing: /var/run/dpdk/spdk_pid343889 00:42:54.537 Removing: 
/var/run/dpdk/spdk_pid343894 00:42:54.537 Removing: /var/run/dpdk/spdk_pid344977 00:42:54.537 Removing: /var/run/dpdk/spdk_pid346154 00:42:54.537 Removing: /var/run/dpdk/spdk_pid347329 00:42:54.537 Removing: /var/run/dpdk/spdk_pid348505 00:42:54.537 Removing: /var/run/dpdk/spdk_pid349689 00:42:54.537 Removing: /var/run/dpdk/spdk_pid350872 00:42:54.537 Removing: /var/run/dpdk/spdk_pid354788 00:42:54.537 Removing: /var/run/dpdk/spdk_pid355126 00:42:54.537 Removing: /var/run/dpdk/spdk_pid356408 00:42:54.537 Removing: /var/run/dpdk/spdk_pid357275 00:42:54.537 Removing: /var/run/dpdk/spdk_pid361008 00:42:54.537 Removing: /var/run/dpdk/spdk_pid362868 00:42:54.537 Removing: /var/run/dpdk/spdk_pid366366 00:42:54.537 Removing: /var/run/dpdk/spdk_pid369720 00:42:54.537 Removing: /var/run/dpdk/spdk_pid376823 00:42:54.537 Removing: /var/run/dpdk/spdk_pid381191 00:42:54.537 Removing: /var/run/dpdk/spdk_pid381291 00:42:54.537 Removing: /var/run/dpdk/spdk_pid394095 00:42:54.537 Removing: /var/run/dpdk/spdk_pid394578 00:42:54.537 Removing: /var/run/dpdk/spdk_pid395026 00:42:54.537 Removing: /var/run/dpdk/spdk_pid395436 00:42:54.537 Removing: /var/run/dpdk/spdk_pid396011 00:42:54.537 Removing: /var/run/dpdk/spdk_pid396422 00:42:54.537 Removing: /var/run/dpdk/spdk_pid396934 00:42:54.537 Removing: /var/run/dpdk/spdk_pid397349 00:42:54.537 Removing: /var/run/dpdk/spdk_pid399852 00:42:54.537 Removing: /var/run/dpdk/spdk_pid400010 00:42:54.537 Removing: /var/run/dpdk/spdk_pid403919 00:42:54.537 Removing: /var/run/dpdk/spdk_pid404088 00:42:54.537 Removing: /var/run/dpdk/spdk_pid407832 00:42:54.537 Removing: /var/run/dpdk/spdk_pid410442 00:42:54.537 Removing: /var/run/dpdk/spdk_pid417359 00:42:54.537 Removing: /var/run/dpdk/spdk_pid417768 00:42:54.537 Removing: /var/run/dpdk/spdk_pid420268 00:42:54.537 Removing: /var/run/dpdk/spdk_pid420544 00:42:54.537 Removing: /var/run/dpdk/spdk_pid423043 00:42:54.537 Removing: /var/run/dpdk/spdk_pid426728 00:42:54.537 Removing: /var/run/dpdk/spdk_pid428773 00:42:54.537 Removing: /var/run/dpdk/spdk_pid435135 00:42:54.537 Removing: /var/run/dpdk/spdk_pid440442 00:42:54.537 Removing: /var/run/dpdk/spdk_pid442139 00:42:54.537 Removing: /var/run/dpdk/spdk_pid442796 00:42:54.537 Removing: /var/run/dpdk/spdk_pid452959 00:42:54.537 Removing: /var/run/dpdk/spdk_pid455215 00:42:54.537 Removing: /var/run/dpdk/spdk_pid457215 00:42:54.537 Removing: /var/run/dpdk/spdk_pid462153 00:42:54.537 Removing: /var/run/dpdk/spdk_pid462263 00:42:54.537 Removing: /var/run/dpdk/spdk_pid465160 00:42:54.537 Removing: /var/run/dpdk/spdk_pid466551 00:42:54.537 Removing: /var/run/dpdk/spdk_pid467918 00:42:54.537 Removing: /var/run/dpdk/spdk_pid468697 00:42:54.537 Removing: /var/run/dpdk/spdk_pid470100 00:42:54.537 Removing: /var/run/dpdk/spdk_pid470853 00:42:54.537 Removing: /var/run/dpdk/spdk_pid476879 00:42:54.537 Removing: /var/run/dpdk/spdk_pid477267 00:42:54.796 Removing: /var/run/dpdk/spdk_pid477664 00:42:54.796 Removing: /var/run/dpdk/spdk_pid479216 00:42:54.796 Removing: /var/run/dpdk/spdk_pid479496 00:42:54.796 Removing: /var/run/dpdk/spdk_pid479896 00:42:54.796 Removing: /var/run/dpdk/spdk_pid482336 00:42:54.796 Removing: /var/run/dpdk/spdk_pid482349 00:42:54.796 Removing: /var/run/dpdk/spdk_pid483818 00:42:54.796 Removing: /var/run/dpdk/spdk_pid484184 00:42:54.796 Removing: /var/run/dpdk/spdk_pid484308 00:42:54.796 Removing: /var/run/dpdk/spdk_pid97617 00:42:54.796 Removing: /var/run/dpdk/spdk_pid98354 00:42:54.796 Removing: /var/run/dpdk/spdk_pid99237 00:42:54.796 Removing: 
/var/run/dpdk/spdk_pid99738 00:42:54.796 Clean 00:42:54.796 07:28:15 -- common/autotest_common.sh@1453 -- # return 0 00:42:54.796 07:28:15 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:42:54.796 07:28:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:54.796 07:28:15 -- common/autotest_common.sh@10 -- # set +x 00:42:54.796 07:28:15 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:42:54.796 07:28:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:54.796 07:28:15 -- common/autotest_common.sh@10 -- # set +x 00:42:54.796 07:28:15 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:42:54.796 07:28:15 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:42:54.796 07:28:15 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:42:54.796 07:28:15 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:42:54.796 07:28:15 -- spdk/autotest.sh@398 -- # hostname 00:42:54.796 07:28:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:42:55.054 geninfo: WARNING: invalid characters removed from testname! 00:43:27.126 07:28:46 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:29.656 07:28:50 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:32.935 07:28:53 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:36.216 07:28:56 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:38.745 07:28:59 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:42.026 07:29:02 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:45.307 07:29:05 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:43:45.307 07:29:05 -- spdk/autorun.sh@1 -- $ timing_finish 00:43:45.307 07:29:05 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:43:45.307 07:29:05 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:43:45.307 07:29:05 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:43:45.307 07:29:05 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:45.307 + [[ -n 6056 ]] 00:43:45.307 + sudo kill 6056 00:43:45.317 [Pipeline] } 00:43:45.327 [Pipeline] // stage 00:43:45.331 [Pipeline] } 00:43:45.340 [Pipeline] // timeout 00:43:45.344 [Pipeline] } 00:43:45.353 [Pipeline] // catchError 00:43:45.357 [Pipeline] } 00:43:45.367 [Pipeline] // wrap 00:43:45.372 [Pipeline] } 00:43:45.390 [Pipeline] // catchError 00:43:45.397 [Pipeline] stage 00:43:45.399 [Pipeline] { (Epilogue) 00:43:45.410 [Pipeline] catchError 00:43:45.412 [Pipeline] { 00:43:45.423 [Pipeline] echo 00:43:45.424 Cleanup processes 00:43:45.430 [Pipeline] sh 00:43:45.717 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:45.718 496571 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:45.732 [Pipeline] sh 00:43:46.017 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:46.017 ++ grep -v 'sudo pgrep' 00:43:46.017 ++ awk '{print $1}' 00:43:46.017 + sudo kill -9 00:43:46.017 + true 00:43:46.031 [Pipeline] sh 00:43:46.316 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:43:58.525 [Pipeline] sh 00:43:58.809 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:43:58.809 Artifacts sizes are good 00:43:58.824 [Pipeline] archiveArtifacts 00:43:58.831 Archiving artifacts 00:43:59.296 [Pipeline] sh 00:43:59.649 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:43:59.665 [Pipeline] cleanWs 00:43:59.675 [WS-CLEANUP] Deleting project workspace... 00:43:59.676 [WS-CLEANUP] Deferred wipeout is used... 00:43:59.683 [WS-CLEANUP] done 00:43:59.685 [Pipeline] } 00:43:59.703 [Pipeline] // catchError 00:43:59.715 [Pipeline] sh 00:43:59.999 + logger -p user.info -t JENKINS-CI 00:44:00.007 [Pipeline] } 00:44:00.021 [Pipeline] // stage 00:44:00.027 [Pipeline] } 00:44:00.041 [Pipeline] // node 00:44:00.047 [Pipeline] End of Pipeline 00:44:00.091 Finished: SUCCESS